INFO[2025-11-05T03:20:13Z] ci-operator version v20251104-e6cf94482
INFO[2025-11-05T03:20:13Z] Loading configuration from https://config.ci.openshift.org for openshift,openshift/origin,cluster-control-plane-machine-set-operator@main,main
INFO[2025-11-05T03:20:15Z] Resolved source https://github.com/openshift/origin to main@560414aa, merging: #30452 a79fef53 @sunzhaohua2
INFO[2025-11-05T03:20:15Z] Resolved source https://github.com/openshift/cluster-control-plane-machine-set-operator to main@344babe6, merging: #363 e110a18b @sunzhaohua2
WARN[2025-11-05T03:20:15Z] skipped directory "..2025_11_05_03_18_14.997145995" when creating secret from directory "/secrets/ci-pull-credentials"
INFO[2025-11-05T03:20:15Z] Loading information from https://config.ci.openshift.org for integrated stream ocp/4.21
INFO[2025-11-05T03:20:15Z] Loading information from https://config.ci.openshift.org for integrated stream ocp/4.21
INFO[2025-11-05T03:20:15Z] Building release initial from a snapshot of ocp/4.21
INFO[2025-11-05T03:20:15Z] Building release latest from a snapshot of ocp/4.21
INFO[2025-11-05T03:20:15Z] Requesting a release from https://arm64.ocp.releases.ci.openshift.org/api/v1/releasestream/4.21.0-0.nightly-arm64/latest
INFO[2025-11-05T03:20:15Z] Resolved release nightly-arm64 to registry.ci.openshift.org/ocp-arm64/release-arm64:4.21.0-0.nightly-arm64-2025-11-04-171225
INFO[2025-11-05T03:20:16Z] Using namespace https://console.build04.ci.openshift.org/k8s/cluster/projects/ci-op-x0f88pwp
INFO[2025-11-05T03:20:16Z] Setting arch for src-openshift.cluster-control-plane-machine-set-operator arch=amd64 reasons=cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator
INFO[2025-11-05T03:20:16Z] Setting arch for src arch=amd64 reasons=hello-openshift, tests
INFO[2025-11-05T03:20:16Z] Running [input:root-openshift.cluster-control-plane-machine-set-operator], [input:root], [input:ocp_4.16_base-rhel9], [input:ocp_builder_rhel-9-golang-1.24-openshift-4.21], [input:tools], [input:ocp_4.21_base-rhel9-openshift.cluster-control-plane-machine-set-operator], [input:ocp_builder_rhel-9-golang-1.22-openshift-4.17], [input:ocp_builder_rhel-9-golang-1.24-openshift-4.21-openshift.cluster-control-plane-machine-set-operator], [input:origin-centos-8], [input:ocp-4.12-upi-installer], [input:ocp-4.14-upi-installer], [input:ocp-4.16-upi-installer], [input:ocp-4.5-upi-installer], [release-inputs:initial], [release-inputs:latest], src-openshift.cluster-control-plane-machine-set-operator, src, hello-openshift, tests, cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator, [output:stable:tests], [output:stable:hello-openshift], [output:stable:cluster-control-plane-machine-set-operator], [images], [release:latest], e2e-gcp-disruptive
INFO[2025-11-05T03:20:17Z] Loading information from https://config.ci.openshift.org for cluster profile gcp-openshift-gce-devel-ci-2
INFO[2025-11-05T03:20:17Z] Tagging openshift/release:rhel-9-release-golang-1.24-openshift-4.21 into pipeline:root.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.21:tools into pipeline:tools.
INFO[2025-11-05T03:20:17Z] Tagging openshift/release:rhel-9-release-golang-1.24-openshift-4.21 into pipeline:root-openshift.cluster-control-plane-machine-set-operator.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.5:upi-installer into pipeline:ocp-4.5-upi-installer.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.21:base-rhel9 into pipeline:ocp_4.21_base-rhel9-openshift.cluster-control-plane-machine-set-operator.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.16:upi-installer into pipeline:ocp-4.16-upi-installer.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.12:upi-installer into pipeline:ocp-4.12-upi-installer.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.14:upi-installer into pipeline:ocp-4.14-upi-installer.
INFO[2025-11-05T03:20:17Z] Tagging ocp/builder:rhel-9-golang-1.22-openshift-4.17 into pipeline:ocp_builder_rhel-9-golang-1.22-openshift-4.17.
INFO[2025-11-05T03:20:17Z] Tagging ocp/builder:rhel-9-golang-1.24-openshift-4.21 into pipeline:ocp_builder_rhel-9-golang-1.24-openshift-4.21-openshift.cluster-control-plane-machine-set-operator.
INFO[2025-11-05T03:20:17Z] Tagging ocp/4.21:base-rhel9 into pipeline:ocp_4.16_base-rhel9.
INFO[2025-11-05T03:20:17Z] Tagging ocp/builder:rhel-9-golang-1.24-openshift-4.21 into pipeline:ocp_builder_rhel-9-golang-1.24-openshift-4.21.
INFO[2025-11-05T03:20:17Z] Tagging origin/centos:8 into pipeline:origin-centos-8.
INFO[2025-11-05T03:20:17Z] Waiting to import tags on imagestream (after taking snapshot) ci-op-x0f88pwp/stable ...
INFO[2025-11-05T03:20:17Z] Waiting to import tags on imagestream (after taking snapshot) ci-op-x0f88pwp/stable-initial ...
INFO[2025-11-05T03:20:30Z] Building src
INFO[2025-11-05T03:20:30Z] Building src-openshift.cluster-control-plane-machine-set-operator
INFO[2025-11-05T03:21:30Z] Created build "src-openshift.cluster-control-plane-machine-set-operator-amd64"
INFO[2025-11-05T03:21:30Z] Created build "src-amd64"
INFO[2025-11-05T03:23:32Z] Imported tags on imagestream (after taking snapshot) ci-op-x0f88pwp/stable
INFO[2025-11-05T03:23:33Z] Imported tags on imagestream (after taking snapshot) ci-op-x0f88pwp/stable-initial
INFO[2025-11-05T03:24:36Z] Build src-openshift.cluster-control-plane-machine-set-operator-amd64 succeeded after 3m6s
INFO[2025-11-05T03:24:37Z] Retrieving digests of member images
INFO[2025-11-05T03:24:38Z] Image ci-op-x0f88pwp/pipeline:src-openshift.cluster-control-plane-machine-set-operator created digest=sha256:a060e44529ad6b6a0880a03bf57ab9905bf0132511080b93690cc4bad555f1e5 for-build=src-openshift.cluster-control-plane-machine-set-operator
INFO[2025-11-05T03:24:38Z] Building cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator
INFO[2025-11-05T03:25:38Z] Created build "cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator-amd64"
INFO[2025-11-05T03:29:51Z] Build cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator-amd64 succeeded after 4m13s
INFO[2025-11-05T03:29:52Z] Retrieving digests of member images
INFO[2025-11-05T03:29:53Z] Image ci-op-x0f88pwp/pipeline:cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator created digest=sha256:23bd99bf96acfea5f691a9cbae704ccb785fee63b3c188c0d34d893d8340e55d for-build=cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator
INFO[2025-11-05T03:29:53Z] Tagging cluster-control-plane-machine-set-operator-openshift.cluster-control-plane-machine-set-operator into /stable:cluster-control-plane-machine-set-operator
INFO[2025-11-05T03:31:00Z] Build src-amd64 succeeded after 9m30s
INFO[2025-11-05T03:31:00Z] Retrieving digests of member images
INFO[2025-11-05T03:31:01Z] Image ci-op-x0f88pwp/pipeline:src created digest=sha256:fbb4e3189bb3036785bbbf6da0c874ece064462f300319251669f27fea4b630f for-build=src
INFO[2025-11-05T03:31:01Z] Building hello-openshift
INFO[2025-11-05T03:31:01Z] Building tests
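ci-operator runs these image builds as ordinary OpenShift Build objects in the ephemeral ci-op-x0f88pwp namespace, and records each result as a tag on the pipeline imagestream (the digest= fields below). A minimal sketch of following one build by hand with standard oc commands, assuming view access to the build-farm namespace (the names are taken from this log; none of this is part of the job itself):

  # Stream the log of the in-flight source build.
  oc -n ci-op-x0f88pwp logs -f build/src-amd64
  # Check the build phase (Running / Complete / Failed).
  oc -n ci-op-x0f88pwp get build src-amd64 -o jsonpath='{.status.phase}{"\n"}'
  # Read back the digest ci-operator recorded for the pipeline tag.
  oc -n ci-op-x0f88pwp get istag pipeline:src -o jsonpath='{.image.metadata.name}{"\n"}'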
INFO[2025-11-05T03:32:01Z] Created build "hello-openshift-amd64"
INFO[2025-11-05T03:32:02Z] Created build "tests-amd64"
INFO[2025-11-05T03:35:47Z] Build hello-openshift-amd64 succeeded after 3m45s
INFO[2025-11-05T03:35:47Z] Retrieving digests of member images
INFO[2025-11-05T03:35:49Z] Image ci-op-x0f88pwp/pipeline:hello-openshift created digest=sha256:477d194a4e8df74b6d88662407d85e14f9d0297f524b599d709c50693ad83588 for-build=hello-openshift
INFO[2025-11-05T03:35:49Z] Tagging hello-openshift into stable
INFO[2025-11-05T03:42:57Z] Build tests-amd64 succeeded after 10m55s
INFO[2025-11-05T03:42:57Z] Retrieving digests of member images
INFO[2025-11-05T03:42:59Z] Image ci-op-x0f88pwp/pipeline:tests created digest=sha256:a87b6b8bbc4170ca7bc40d4e65fc9abacf20e9273f0134c3d4ae3b755bb522ea for-build=tests
INFO[2025-11-05T03:42:59Z] Tagging tests into stable
INFO[2025-11-05T03:42:59Z] Creating release image registry.build04.ci.openshift.org/ci-op-x0f88pwp/release:latest.
INFO[2025-11-05T03:44:37Z] Snapshot integration stream into release 4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest to tag release:latest
INFO[2025-11-05T03:44:37Z] Acquiring leases for test e2e-gcp-disruptive: [gcp-openshift-gce-devel-ci-2-quota-slice]
INFO[2025-11-05T03:44:37Z] Acquired 1 lease(s) for gcp-openshift-gce-devel-ci-2-quota-slice: [us-central1--gcp-openshift-gce-devel-ci-2-quota-slice-58]
INFO[2025-11-05T03:44:37Z] Running multi-stage test e2e-gcp-disruptive
INFO[2025-11-05T03:44:37Z] Running multi-stage phase pre
INFO[2025-11-05T03:44:37Z] Running step e2e-gcp-disruptive-observers-resource-watch.
INFO[2025-11-05T03:44:37Z] Running step e2e-gcp-disruptive-ipi-conf.
INFO[2025-11-05T03:44:45Z] Step e2e-gcp-disruptive-ipi-conf succeeded after 7s.
INFO[2025-11-05T03:44:45Z] Running step e2e-gcp-disruptive-ipi-conf-telemetry.
INFO[2025-11-05T03:44:53Z] Step e2e-gcp-disruptive-ipi-conf-telemetry succeeded after 8s.
INFO[2025-11-05T03:44:53Z] Running step e2e-gcp-disruptive-ipi-conf-gcp.
INFO[2025-11-05T03:45:01Z] Step e2e-gcp-disruptive-ipi-conf-gcp succeeded after 8s.
INFO[2025-11-05T03:45:01Z] Running step e2e-gcp-disruptive-ipi-install-monitoringpvc.
INFO[2025-11-05T03:45:10Z] Step e2e-gcp-disruptive-ipi-install-monitoringpvc succeeded after 8s.
INFO[2025-11-05T03:45:10Z] Running step e2e-gcp-disruptive-ipi-install-rbac.
INFO[2025-11-05T03:45:17Z] Step e2e-gcp-disruptive-ipi-install-rbac succeeded after 7s.
INFO[2025-11-05T03:45:17Z] Running step e2e-gcp-disruptive-openshift-cluster-bot-rbac.
INFO[2025-11-05T03:45:25Z] Step e2e-gcp-disruptive-openshift-cluster-bot-rbac succeeded after 7s.
INFO[2025-11-05T03:45:25Z] Running step e2e-gcp-disruptive-ipi-install-hosted-loki.
INFO[2025-11-05T03:45:33Z] Step e2e-gcp-disruptive-ipi-install-hosted-loki succeeded after 7s.
INFO[2025-11-05T03:45:33Z] Running step e2e-gcp-disruptive-ipi-install-install.
INFO[2025-11-05T04:37:06Z] Step e2e-gcp-disruptive-ipi-install-install succeeded after 51m32s.
INFO[2025-11-05T04:37:06Z] Running step e2e-gcp-disruptive-ipi-install-times-collection.
INFO[2025-11-05T04:37:13Z] Step e2e-gcp-disruptive-ipi-install-times-collection succeeded after 7s.
INFO[2025-11-05T04:37:13Z] Running step e2e-gcp-disruptive-nodes-readiness.
INFO[2025-11-05T04:37:21Z] Step e2e-gcp-disruptive-nodes-readiness succeeded after 7s.
INFO[2025-11-05T04:37:21Z] Running step e2e-gcp-disruptive-multiarch-validate-nodes.
INFO[2025-11-05T04:37:29Z] Step e2e-gcp-disruptive-multiarch-validate-nodes succeeded after 7s.
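The release:latest payload assembled above (tag 4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest) is an ordinary release image, so it can be inspected after the fact with oc adm release info, given pull credentials for registry.build04.ci.openshift.org. A sketch, for illustration only:

  # Show the payload's version metadata and component list.
  oc adm release info registry.build04.ci.openshift.org/ci-op-x0f88pwp/release:latest
  # Confirm the PR-built operator image made it into the payload.
  oc adm release info --pullspecs \
      registry.build04.ci.openshift.org/ci-op-x0f88pwp/release:latest \
    | grep cluster-control-plane-machine-set-operator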
INFO[2025-11-05T04:37:29Z] Step phase pre succeeded after 52m51s.
INFO[2025-11-05T04:37:29Z] Running multi-stage phase test
INFO[2025-11-05T04:37:29Z] Running step e2e-gcp-disruptive-openshift-e2e-test.
INFO[2025-11-05T08:47:41Z] Logs for container test in pod e2e-gcp-disruptive-openshift-e2e-test:
INFO[2025-11-05T08:47:41Z] Granting access for image pulling from the build farm...
clusterrole.rbac.authorization.k8s.io/system:image-puller added: "system:unauthenticated"
secret/support created
Setting up ssh bastion
error: the server doesn't have a resource type "ssh-bastion"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  4301  100  4301    0     0  24718      0 --:--:-- --:--:-- --:--:-- 24718
+ set -e
+ SSH_BASTION_NAMESPACE=test-ssh-bastion
+ BASEDIR=https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy
+ trap clean_up EXIT
+ oc apply -f -
namespace/test-ssh-bastion created
+ oc apply -f https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy/clusterrole.yaml
clusterrole.rbac.authorization.k8s.io/ssh-bastion created
+ dry_run_flag=--dry-run=client
+ oc create --help
+ grep dry-run=false
+ oc create clusterrolebinding ssh-bastion --clusterrole=ssh-bastion --user=system:serviceaccount:test-ssh-bastion:ssh-bastion -o yaml --dry-run=client
+ oc apply -f -
clusterrolebinding.rbac.authorization.k8s.io/ssh-bastion created
+ oc -n test-ssh-bastion apply -f https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy/service.yaml
service/ssh-bastion created
+ oc -n test-ssh-bastion get secret ssh-host-keys
+ create_host_keys
++ mktemp -u
+ RSATMP=/tmp/tmp.9eSGeR0dU3
+ /usr/bin/ssh-keygen -q -t rsa -f /tmp/tmp.9eSGeR0dU3 -C '' -N ''
++ mktemp -u
+ ECDSATMP=/tmp/tmp.ZYmPHsQ2vo
+ /usr/bin/ssh-keygen -q -t ecdsa -f /tmp/tmp.ZYmPHsQ2vo -C '' -N ''
++ mktemp -u
+ ED25519TMP=/tmp/tmp.n9z3zhnIiC
+ /usr/bin/ssh-keygen -q -t ed25519 -f /tmp/tmp.n9z3zhnIiC -C '' -N ''
++ mktemp
+ CONFIGFILE=/tmp/tmp.mM6Y60mmCP
+ echo 'HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
SyslogFacility AUTHPRIV
PermitRootLogin no
AuthorizedKeysFile /home/core/.ssh/authorized_keys
PasswordAuthentication no
ChallengeResponseAuthentication no
GSSAPIAuthentication yes
GSSAPICleanupCredentials no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/libexec/openssh/sftp-server
'
+ oc -n test-ssh-bastion create secret generic ssh-host-keys --from-file=ssh_host_rsa_key=/tmp/tmp.9eSGeR0dU3,ssh_host_ecdsa_key=/tmp/tmp.ZYmPHsQ2vo,ssh_host_ed25519_key=/tmp/tmp.n9z3zhnIiC,sshd_config=/tmp/tmp.mM6Y60mmCP
secret/ssh-host-keys created
+ oc -n test-ssh-bastion apply -f https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy/serviceaccount.yaml
serviceaccount/ssh-bastion created
+ oc -n test-ssh-bastion apply -f https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy/role.yaml
role.rbac.authorization.k8s.io/ssh-bastion created
+ oc -n test-ssh-bastion create rolebinding ssh-bastion --clusterrole=ssh-bastion --user=system:serviceaccount:test-ssh-bastion:ssh-bastion -o yaml --dry-run=client
+ oc apply -f -
rolebinding.rbac.authorization.k8s.io/ssh-bastion created
+ oc -n test-ssh-bastion apply -f https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy/deployment.yaml
deployment.apps/ssh-bastion created
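The trace above shows the script probing oc create --help for the old dry-run=false spelling before settling on --dry-run=client, then rendering each binding client-side and piping it through oc apply so reruns stay idempotent. The pattern, condensed into one pipeline (same object names as in the trace):

  # Render the object without creating it, then apply it: this succeeds
  # whether or not the binding already exists, unlike a bare `oc create`.
  oc create clusterrolebinding ssh-bastion \
      --clusterrole=ssh-bastion \
      --user=system:serviceaccount:test-ssh-bastion:ssh-bastion \
      -o yaml --dry-run=client \
    | oc apply -f -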
+ retry=120
+ '[' 120 -ge 0 ']'
+ retry=119
++ oc get service -n test-ssh-bastion ssh-bastion -o 'jsonpath={.status.loadBalancer.ingress[0].hostname}'
+ bastion_host=
+ '[' -n '' ']'
++ oc get service -n test-ssh-bastion ssh-bastion -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
+ bastion_ip=
+ '[' -n '' ']'
+ sleep 1
[... the identical poll repeats once per second while the LoadBalancer provisions, retry counting down from 118 to 97 with bastion_host= and bastion_ip= both empty each time ...]
+ '[' 97 -ge 0 ']'
+ retry=96
++ oc get service -n test-ssh-bastion ssh-bastion -o 'jsonpath={.status.loadBalancer.ingress[0].hostname}'
+ bastion_host=
+ '[' -n '' ']'
++ oc get service -n test-ssh-bastion ssh-bastion -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'
+ bastion_ip=35.184.143.160
+ '[' -n 35.184.143.160 ']'
+ break
+ '[' -n '' ']'
+ bastion_host=35.184.143.160
+ echo 'The bastion address is 35.184.143.160'
The bastion address is 35.184.143.160
+ echo 'You may want to use https://raw.githubusercontent.com/eparis/ssh-bastion/master/ssh.sh to easily ssh through the bastion to specific nodes.'
You may want to use https://raw.githubusercontent.com/eparis/ssh-bastion/master/ssh.sh to easily ssh through the bastion to specific nodes.
+ clean_up
+ ARG=0
+ rm -f /tmp/tmp.9eSGeR0dU3 /tmp/tmp.9eSGeR0dU3.pub
+ rm -f /tmp/tmp.ZYmPHsQ2vo /tmp/tmp.ZYmPHsQ2vo.pub
+ rm -f /tmp/tmp.n9z3zhnIiC /tmp/tmp.n9z3zhnIiC.pub
+ rm -f /tmp/tmp.mM6Y60mmCP
+ exit 0
/tmp /tmp/output
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 82.3M  100 82.3M    0     0   174M      0 --:--:-- --:--:-- --:--:--  174M
Activated service account credentials for: [ci-provisioner-2@XXXXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com]
Updated property [core/project].
/tmp/output
configmap/admin-acks patched
clusterversion.config.openshift.io/version condition met
Wed Nov 5 04:38:25 UTC 2025 - node count (6) now matches or exceeds machine count (6)
Wed Nov 5 04:38:25 UTC 2025 - waiting for nodes to be ready...
node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 condition met
node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 condition met
node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 condition met
node/ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 condition met
node/ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt condition met
node/ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr condition met
Wed Nov 5 04:38:26 UTC 2025 - all nodes are ready
Wed Nov 5 04:38:26 UTC 2025 - waiting for clusteroperators to finish progressing...
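The repeated polling above is the script waiting for GCP to publish an ingress address on the ssh-bastion Service; on a provider like AWS the same loop would pick up a hostname instead of an IP, which is why it checks both fields. A condensed sketch of the loop as traced:

  # Poll up to 120 times for the Service's LoadBalancer ingress.
  for retry in $(seq 119 -1 0); do
    bastion_host=$(oc -n test-ssh-bastion get service ssh-bastion \
      -o 'jsonpath={.status.loadBalancer.ingress[0].hostname}')
    bastion_ip=$(oc -n test-ssh-bastion get service ssh-bastion \
      -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
    [ -n "${bastion_host}${bastion_ip}" ] && break
    sleep 1
  done
  # Fall back to the IP when the provider reports no hostname, as on GCP.
  echo "The bastion address is ${bastion_host:-${bastion_ip}}"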
clusteroperator.config.openshift.io/authentication condition met
clusteroperator.config.openshift.io/baremetal condition met
clusteroperator.config.openshift.io/cloud-controller-manager condition met
clusteroperator.config.openshift.io/cloud-credential condition met
clusteroperator.config.openshift.io/cluster-autoscaler condition met
clusteroperator.config.openshift.io/config-operator condition met
clusteroperator.config.openshift.io/console condition met
clusteroperator.config.openshift.io/control-plane-machine-set condition met
clusteroperator.config.openshift.io/csi-snapshot-controller condition met
clusteroperator.config.openshift.io/dns condition met
clusteroperator.config.openshift.io/etcd condition met
clusteroperator.config.openshift.io/image-registry condition met
clusteroperator.config.openshift.io/ingress condition met
clusteroperator.config.openshift.io/insights condition met
clusteroperator.config.openshift.io/kube-apiserver condition met
clusteroperator.config.openshift.io/kube-controller-manager condition met
clusteroperator.config.openshift.io/kube-scheduler condition met
clusteroperator.config.openshift.io/kube-storage-version-migrator condition met
clusteroperator.config.openshift.io/machine-api condition met
clusteroperator.config.openshift.io/machine-approver condition met
clusteroperator.config.openshift.io/machine-config condition met
clusteroperator.config.openshift.io/marketplace condition met
clusteroperator.config.openshift.io/monitoring condition met
clusteroperator.config.openshift.io/network condition met
clusteroperator.config.openshift.io/node-tuning condition met
clusteroperator.config.openshift.io/olm condition met
clusteroperator.config.openshift.io/openshift-apiserver condition met
clusteroperator.config.openshift.io/openshift-controller-manager condition met
clusteroperator.config.openshift.io/openshift-samples condition met
clusteroperator.config.openshift.io/operator-lifecycle-manager condition met
clusteroperator.config.openshift.io/operator-lifecycle-manager-catalog condition met
clusteroperator.config.openshift.io/operator-lifecycle-manager-packageserver condition met
clusteroperator.config.openshift.io/service-ca condition met
clusteroperator.config.openshift.io/storage condition met
Wed Nov 5 04:38:31 UTC 2025 - all clusteroperators are done progressing.
Wed Nov 5 04:38:31 UTC 2025 - waiting for oc adm wait-for-stable-cluster...
Wed Nov 5 04:40:41 UTC 2025 - oc adm reports cluster is stable.
[Wed Nov 5 04:40:41 UTC 2025] waiting for non-samples imagesteams to import...
[Wed Nov 5 04:40:41 UTC 2025] All imagestreams are imported.
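The "condition met" lines above are classic oc wait output, and the script then defers to oc adm wait-for-stable-cluster for the final gate. Roughly equivalent standalone commands, as a sketch (the timeouts are illustrative, and the exact flags the script passes are not shown in this log):

  # Wait for every node to report Ready.
  oc wait node --all --for=condition=Ready --timeout=30m
  # Wait for every clusteroperator to stop progressing.
  oc wait clusteroperator --all --for=condition=Progressing=false --timeout=30m
  # Then the higher-level stability gate used by the step.
  oc adm wait-for-stable-cluster --minimum-stable-period=2m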
+ openshift-tests run openshift/disruptive --provider '{"type":"gce","region":"us-central1","multizone": true,"multimaster":true,"projectid":"XXXXXXXXXXXXXXXXXXXXXXXX"}' -o /logs/artifacts/e2e.log --junit-dir /logs/artifacts/junit
I1105 04:40:41.922890 1669 factory.go:195] Registered Plugin "containerd"
I1105 04:40:41.955942 1669 i18n.go:119] Couldn't find the LC_ALL, LC_MESSAGES or LANG environment variables, defaulting to en_US
time="2025-11-05T04:40:41Z" level=warning msg="ENABLE_STORAGE_GCE_PD_DRIVER is set, but is not supported"
I1105 04:40:42.455770 1669 binary.go:77] Found 8499 test specs
I1105 04:40:42.459506 1669 binary.go:94] 1049 test specs remain, after filtering out k8s
openshift-tests v4.1.0-10286-gc82b843
time="2025-11-05T04:40:42Z" level=info msg="Using env RELEASE_IMAGE_LATEST for release image \"registry.build04.ci.openshift.org/ci-op-x0f88pwp/release@sha256:4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c\""
time="2025-11-05T04:40:42Z" level=info msg="Detected /run/secrets/ci.openshift.io/cluster-profile/pull-secret; using cluster profile for image access"
time="2025-11-05T04:40:42Z" level=info msg="Cleaning up older cached data..."
time="2025-11-05T04:40:42Z" level=warning msg="Failed to read cache directory '/tmp/home/.cache/openshift-tests': open /tmp/home/.cache/openshift-tests: no such file or directory"
time="2025-11-05T04:40:42Z" level=info msg="External binary cache is enabled" cache_dir=/tmp/home/.cache/openshift-tests
time="2025-11-05T04:40:42Z" level=info msg="Using path for binaries /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e"
time="2025-11-05T04:40:42Z" level=info msg="Run image extract for release image \"registry.build04.ci.openshift.org/ci-op-x0f88pwp/release@sha256:4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c\" and src \"/release-manifests/image-references\""
time="2025-11-05T04:40:49Z" level=info msg="Completed image extract for release image \"registry.build04.ci.openshift.org/ci-op-x0f88pwp/release@sha256:4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c\" in 7.311215259s"
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:b7f50a6baa5898af287af4ccb0ed7defaf03e10d2b5349a43774c61daa12eb3e\" and src \"/usr/bin/k8s-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:4e6477ea02fba423aace83bf83dfb3bf9170cd3136373ba8ecfd5dfdc0884add\" and src \"/usr/bin/machine-config-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14\" and src \"/usr/bin/cluster-kube-apiserver-operator-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:23fde74e22f52dbcee5dc551cc94c8b8d47defc54bc99403fed6d3f312563712\" and src \"/usr/bin/olmv1-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:5ffaae14de89aefb88a30db932964b651b8956c76cafe3b702bad3d5cbbd24b7\" and src \"/usr/bin/cluster-openshift-apiserver-operator-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"registry.build04.ci.openshift.org/ci-op-x0f88pwp/stable@sha256:23bd99bf96acfea5f691a9cbae704ccb785fee63b3c188c0d34d893d8340e55d\" and src \"/control-plane-machine-set-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:53efe6392736969a3b163be376a89aa6731db753d034300fed3f95489178910f\" and src \"/usr/bin/cluster-monitoring-operator-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:aa51ccb8918949be5ba6f67cafab06f513a704761e5a4bad832740143b1fcebc\" and src \"/usr/bin/cluster-storage-operator-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb\" and src \"/usr/bin/openshift-apiserver-tests-ext.gz\""
time="2025-11-05T04:40:49Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:164992a5bcb6e9f13a431623636fd134176031fd9e30959a356a3e9a2e001bfa\" and src \"/machine-api-tests-ext.gz\""
time="2025-11-05T04:40:57Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:53efe6392736969a3b163be376a89aa6731db753d034300fed3f95489178910f\" in 7.778386612s"
time="2025-11-05T04:40:57Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:aa51ccb8918949be5ba6f67cafab06f513a704761e5a4bad832740143b1fcebc\" in 7.953200847s"
time="2025-11-05T04:40:57Z" level=info msg="Extracted /usr/bin/cluster-monitoring-operator-tests-ext.gz for tag cluster-monitoring-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:53efe6392736969a3b163be376a89aa6731db753d034300fed3f95489178910f (disk size 21519980, extraction duration 7.778447102s)"
time="2025-11-05T04:40:57Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335\" and src \"/usr/bin/oauth-apiserver-tests-ext.gz\""
time="2025-11-05T04:40:57Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:5ffaae14de89aefb88a30db932964b651b8956c76cafe3b702bad3d5cbbd24b7\" in 8.192233905s"
time="2025-11-05T04:40:58Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:23fde74e22f52dbcee5dc551cc94c8b8d47defc54bc99403fed6d3f312563712\" in 8.236684655s"
time="2025-11-05T04:40:58Z" level=info msg="Extracted /usr/bin/cluster-openshift-apiserver-operator-tests-ext.gz for tag cluster-openshift-apiserver-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:5ffaae14de89aefb88a30db932964b651b8956c76cafe3b702bad3d5cbbd24b7 (disk size 21876500, extraction duration 8.192291553s)"
time="2025-11-05T04:40:58Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:c3a0cc1e4b8c90efabd777b4fe72d146fe4cf8f5d2777946f1e2c284e1622e36\" and src \"/usr/bin/service-ca-operator-tests-ext.gz\""
time="2025-11-05T04:40:58Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14\" in 8.552743954s"
time="2025-11-05T04:40:58Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb\" in 8.682191036s"
time="2025-11-05T04:40:58Z" level=info msg="Completed image extract for release image \"registry.build04.ci.openshift.org/ci-op-x0f88pwp/stable@sha256:23bd99bf96acfea5f691a9cbae704ccb785fee63b3c188c0d34d893d8340e55d\" in 8.733763012s"
time="2025-11-05T04:40:58Z" level=info msg="Extracted /usr/bin/cluster-storage-operator-tests-ext.gz for tag cluster-storage-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:aa51ccb8918949be5ba6f67cafab06f513a704761e5a4bad832740143b1fcebc (disk size 76959496, extraction duration 7.953250661s)"
time="2025-11-05T04:40:58Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:a9721c6e61db711562fbb0412bc477d4c31ed6cadb4fe49ecf0b06ccc3635543\" and src \"/usr/bin/cluster-kube-controller-manager-operator-tests-ext.gz\""
time="2025-11-05T04:40:58Z" level=info msg="Extracted /usr/bin/cluster-kube-apiserver-operator-tests-ext.gz for tag cluster-kube-apiserver-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14 (disk size 22510557, extraction duration 8.552885233s)"
time="2025-11-05T04:40:58Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:f99c3c3569e6810299ead1f9546dcf4697274f4ff56ba27556f2124528e21136\" and src \"/usr/bin/cluster-kube-storage-version-migrator-operator-tests-ext.gz\""
time="2025-11-05T04:40:58Z" level=info msg="Extracted /usr/bin/openshift-apiserver-tests-ext.gz for tag openshift-apiserver from quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb (disk size 21878335, extraction duration 8.682251335s)"
time="2025-11-05T04:40:58Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:0362e016037e2f8802c875dc7938e7057c50963e547b2b6aafaa9717c54154d6\" and src \"/usr/bin/olmv0-tests-ext.gz\""
time="2025-11-05T04:40:59Z" level=info msg="Extracted /usr/bin/olmv1-tests-ext.gz for tag olm-operator-controller from quay-proxy.ci.openshift.org/openshift/ci@sha256:23fde74e22f52dbcee5dc551cc94c8b8d47defc54bc99403fed6d3f312563712 (disk size 113074504, extraction duration 8.236744394s)"
time="2025-11-05T04:40:59Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:8f6303b0db8f84f48db90f4cdc66b656d41efb6e946865ddfd3e626d16eaeb2d\" and src \"/usr/bin/cluster-openshift-controller-manager-operator-tests-ext.gz\""
time="2025-11-05T04:40:59Z" level=info msg="Extracted /control-plane-machine-set-tests-ext.gz for tag cluster-control-plane-machine-set-operator from registry.build04.ci.openshift.org/ci-op-x0f88pwp/stable@sha256:23bd99bf96acfea5f691a9cbae704ccb785fee63b3c188c0d34d893d8340e55d (disk size 69214048, extraction duration 8.733812846s)"
time="2025-11-05T04:40:59Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:1e8d9b203c5563aab67e6bdb7201d2378066f76225ad1b43274b7a3a1ce24c82\" and src \"/usr/bin/openshift-controller-manager-tests-ext.gz\""
time="2025-11-05T04:41:00Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:b7f50a6baa5898af287af4ccb0ed7defaf03e10d2b5349a43774c61daa12eb3e\" in 10.700702537s"
time="2025-11-05T04:41:01Z" level=info msg="Extracted /usr/bin/k8s-tests-ext.gz for tag hyperkube from quay-proxy.ci.openshift.org/openshift/ci@sha256:b7f50a6baa5898af287af4ccb0ed7defaf03e10d2b5349a43774c61daa12eb3e (disk size 130573224, extraction duration 10.700771544s)"
time="2025-11-05T04:41:01Z" level=info msg="Run image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:4eef1d85d75485af67c22eb9976c5a17c48941c1a052ded49fd09f69f3a173ae\" and src \"/usr/bin/cluster-config-operator-tests-ext.gz\""
time="2025-11-05T04:41:02Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:164992a5bcb6e9f13a431623636fd134176031fd9e30959a356a3e9a2e001bfa\" in 12.295442621s"
time="2025-11-05T04:41:03Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:4e6477ea02fba423aace83bf83dfb3bf9170cd3136373ba8ecfd5dfdc0884add\" in 13.757082913s"
time="2025-11-05T04:41:04Z" level=info msg="Extracted /machine-api-tests-ext.gz for tag machine-api-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:164992a5bcb6e9f13a431623636fd134176031fd9e30959a356a3e9a2e001bfa (disk size 207912936, extraction duration 12.295503082s)"
time="2025-11-05T04:41:04Z" level=info msg="Extracted /usr/bin/machine-config-tests-ext.gz for tag machine-config-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:4e6477ea02fba423aace83bf83dfb3bf9170cd3136373ba8ecfd5dfdc0884add (disk size 95297776, extraction duration 13.757143018s)"
time="2025-11-05T04:41:05Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335\" in 7.794025915s"
time="2025-11-05T04:41:05Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:c3a0cc1e4b8c90efabd777b4fe72d146fe4cf8f5d2777946f1e2c284e1622e36\" in 7.403869236s"
time="2025-11-05T04:41:05Z" level=info msg="Extracted /usr/bin/oauth-apiserver-tests-ext.gz for tag oauth-apiserver from quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335 (disk size 21865152, extraction duration 7.794113725s)"
time="2025-11-05T04:41:05Z" level=info msg="Extracted /usr/bin/service-ca-operator-tests-ext.gz for tag service-ca-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:c3a0cc1e4b8c90efabd777b4fe72d146fe4cf8f5d2777946f1e2c284e1622e36 (disk size 22515032, extraction duration 7.403925289s)"
time="2025-11-05T04:41:06Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:f99c3c3569e6810299ead1f9546dcf4697274f4ff56ba27556f2124528e21136\" in 7.636026364s"
time="2025-11-05T04:41:06Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:a9721c6e61db711562fbb0412bc477d4c31ed6cadb4fe49ecf0b06ccc3635543\" in 7.903739748s"
time="2025-11-05T04:41:06Z" level=info msg="Extracted /usr/bin/cluster-kube-storage-version-migrator-operator-tests-ext.gz for tag cluster-kube-storage-version-migrator-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:f99c3c3569e6810299ead1f9546dcf4697274f4ff56ba27556f2124528e21136 (disk size 21238023, extraction duration 7.636098444s)"
time="2025-11-05T04:41:06Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:1e8d9b203c5563aab67e6bdb7201d2378066f76225ad1b43274b7a3a1ce24c82\" in 7.371236673s"
time="2025-11-05T04:41:06Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:8f6303b0db8f84f48db90f4cdc66b656d41efb6e946865ddfd3e626d16eaeb2d\" in 7.501370728s"
time="2025-11-05T04:41:06Z" level=info msg="Extracted /usr/bin/cluster-kube-controller-manager-operator-tests-ext.gz for tag cluster-kube-controller-manager-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:a9721c6e61db711562fbb0412bc477d4c31ed6cadb4fe49ecf0b06ccc3635543 (disk size 22499428, extraction duration 7.90382837s)"
time="2025-11-05T04:41:06Z" level=info msg="Extracted /usr/bin/openshift-controller-manager-tests-ext.gz for tag openshift-controller-manager from quay-proxy.ci.openshift.org/openshift/ci@sha256:1e8d9b203c5563aab67e6bdb7201d2378066f76225ad1b43274b7a3a1ce24c82 (disk size 21900766, extraction duration 7.371282906s)"
time="2025-11-05T04:41:06Z" level=info msg="Extracted /usr/bin/cluster-openshift-controller-manager-operator-tests-ext.gz for tag cluster-openshift-controller-manager-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:8f6303b0db8f84f48db90f4cdc66b656d41efb6e946865ddfd3e626d16eaeb2d (disk size 21913985, extraction duration 7.501417847s)"
time="2025-11-05T04:41:08Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:4eef1d85d75485af67c22eb9976c5a17c48941c1a052ded49fd09f69f3a173ae\" in 6.740522348s"
time="2025-11-05T04:41:08Z" level=info msg="Extracted /usr/bin/cluster-config-operator-tests-ext.gz for tag cluster-config-operator from quay-proxy.ci.openshift.org/openshift/ci@sha256:4eef1d85d75485af67c22eb9976c5a17c48941c1a052ded49fd09f69f3a173ae (disk size 21868960, extraction duration 6.740580906s)"
time="2025-11-05T04:41:09Z" level=info msg="Completed image extract for release image \"quay-proxy.ci.openshift.org/openshift/ci@sha256:0362e016037e2f8802c875dc7938e7057c50963e547b2b6aafaa9717c54154d6\" in 10.479719409s"
time="2025-11-05T04:41:10Z" level=info msg="Extracted /usr/bin/olmv0-tests-ext.gz for tag operator-lifecycle-manager from quay-proxy.ci.openshift.org/openshift/ci@sha256:0362e016037e2f8802c875dc7938e7057c50963e547b2b6aafaa9717c54154d6 (disk size 110504032, extraction duration 10.479769485s)"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-openshift-apiserver-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-monitoring-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-storage-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for openshift-tests"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-monitoring-operator-tests-ext in 7.370151ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-openshift-apiserver-operator-tests-ext in 7.45336ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for openshift-apiserver-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-kube-apiserver-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for openshift-apiserver-tests-ext in 8.514311ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-kube-apiserver-operator-tests-ext in 8.56196ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for control-plane-machine-set-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for olmv1-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-storage-operator-tests-ext in 30.980838ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for k8s-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for control-plane-machine-set-tests-ext in 35.310925ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for machine-api-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for olmv1-tests-ext in 65.412169ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for machine-config-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for machine-config-tests-ext in 88.463204ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for oauth-apiserver-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for oauth-apiserver-tests-ext in 10.138316ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for service-ca-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for service-ca-operator-tests-ext in 9.492618ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-kube-storage-version-migrator-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-kube-storage-version-migrator-operator-tests-ext in 7.9439ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-kube-controller-manager-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-kube-controller-manager-operator-tests-ext in 8.334035ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for openshift-controller-manager-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for openshift-controller-manager-tests-ext in 6.384196ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-openshift-controller-manager-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-openshift-controller-manager-operator-tests-ext in 6.776372ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for cluster-config-operator-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for cluster-config-operator-tests-ext in 6.609105ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetching info for olmv0-tests-ext"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for olmv0-tests-ext in 54.893251ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for machine-api-tests-ext in 468.164707ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for k8s-tests-ext in 604.974912ms"
time="2025-11-05T04:41:10Z" level=info msg="Fetched info for openshift-tests in 687.680977ms"
I1105 04:41:11.017370 1669 test_setup.go:125] Extended test version v4.1.0-10286-gc82b843
I1105 04:41:11.017438 1669 test_context.go:559] Tolerating taints "node-role.kubernetes.io/control-plane" when considering if nodes are ready
I1105 04:41:11.028839 1669 framework.go:2334] microshift-version configmap not found
openshift-tests version: v4.1.0-10286-gc82b843
time="2025-11-05T04:41:11Z" level=info msg="Using env RELEASE_IMAGE_LATEST for release image \"registry.build04.ci.openshift.org/ci-op-x0f88pwp/release@sha256:4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c\""
time="2025-11-05T04:41:11Z" level=info msg="Detected /run/secrets/ci.openshift.io/cluster-profile/pull-secret; using cluster profile for image access"
time="2025-11-05T04:41:11Z" level=info msg="Cleaned up old cached data in 28.265µs" time="2025-11-05T04:41:11Z" level=info msg="External binary cache is enabled" cache_dir=/tmp/home/.cache/openshift-tests time="2025-11-05T04:41:11Z" level=info msg="Using path for binaries /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/k8s-tests-ext for tag hyperkube" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/oauth-apiserver-tests-ext for tag oauth-apiserver" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/service-ca-operator-tests-ext for tag service-ca-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-kube-controller-manager-operator-tests-ext for tag cluster-kube-controller-manager-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-monitoring-operator-tests-ext for tag cluster-monitoring-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/machine-api-tests-ext for tag machine-api-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/machine-config-tests-ext for tag machine-config-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-storage-operator-tests-ext for tag cluster-storage-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-openshift-apiserver-operator-tests-ext for tag cluster-openshift-apiserver-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/olmv0-tests-ext for tag operator-lifecycle-manager" time="2025-11-05T04:41:11Z" level=info msg="Using 
existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-kube-storage-version-migrator-operator-tests-ext for tag cluster-kube-storage-version-migrator-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-kube-apiserver-operator-tests-ext for tag cluster-kube-apiserver-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/control-plane-machine-set-tests-ext for tag cluster-control-plane-machine-set-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/openshift-apiserver-tests-ext for tag openshift-apiserver" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/olmv1-tests-ext for tag olm-operator-controller" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/openshift-controller-manager-tests-ext for tag openshift-controller-manager" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-openshift-controller-manager-operator-tests-ext for tag cluster-openshift-controller-manager-operator" time="2025-11-05T04:41:11Z" level=info msg="Using existing binary /tmp/home/.cache/openshift-tests/registry_build04_ci_openshift_org_ci-op-x0f88pwp_release_sha256_4ff5f07a45834f09726c6bf3b8e16a0497f4efba4132f493459b5de9949e169c_d9b86a7efb8e/cluster-config-operator-tests-ext for tag cluster-config-operator" time="2025-11-05T04:41:11Z" level=info msg="Fetching info from 19 extension binaries" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for openshift-tests" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for service-ca-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-openshift-apiserver-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for machine-api-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-kube-controller-manager-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for machine-config-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-storage-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for k8s-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-monitoring-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for oauth-apiserver-tests-ext" 
time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-openshift-apiserver-operator-tests-ext in 9.566718ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for olmv0-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-monitoring-operator-tests-ext in 9.498842ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-kube-storage-version-migrator-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for oauth-apiserver-tests-ext in 10.185859ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-kube-apiserver-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-kube-controller-manager-operator-tests-ext in 10.465188ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for control-plane-machine-set-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for service-ca-operator-tests-ext in 10.676687ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for openshift-apiserver-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-kube-storage-version-migrator-operator-tests-ext in 9.332813ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for olmv1-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-kube-apiserver-operator-tests-ext in 10.750796ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for openshift-controller-manager-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for openshift-apiserver-tests-ext in 10.500517ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-openshift-controller-manager-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for openshift-controller-manager-tests-ext in 9.755444ms" time="2025-11-05T04:41:11Z" level=info msg="Fetching info for cluster-config-operator-tests-ext" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-openshift-controller-manager-operator-tests-ext in 9.448851ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-storage-operator-tests-ext in 31.178816ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for cluster-config-operator-tests-ext in 8.59131ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for control-plane-machine-set-tests-ext in 46.273663ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for olmv0-tests-ext in 70.852557ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for olmv1-tests-ext in 91.383159ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for machine-config-tests-ext in 111.188381ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for machine-api-tests-ext in 475.896294ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for k8s-tests-ext in 614.146499ms" time="2025-11-05T04:41:11Z" level=info msg="Fetched info for openshift-tests in 726.086979ms" time="2025-11-05T04:41:11Z" level=info msg="Discovered 19 extensions" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-openshift-apiserver-operator found in cluster-openshift-apiserver-operator:cluster-openshift-apiserver-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-monitoring-operator found in cluster-monitoring-operator:cluster-monitoring-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension 
openshift:payload:oauth-apiserver found in oauth-apiserver:oauth-apiserver-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-kube-controller-manager-operator found in cluster-kube-controller-manager-operator:cluster-kube-controller-manager-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:service-ca-operator found in service-ca-operator:service-ca-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-kube-storage-version-migrator-operator found in cluster-kube-storage-version-migrator-operator:cluster-kube-storage-version-migrator-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-kube-apiserver-operator found in cluster-kube-apiserver-operator:cluster-kube-apiserver-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:openshift-apiserver found in openshift-apiserver:openshift-apiserver-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:openshift-controller-manager found in openshift-controller-manager:openshift-controller-manager-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-openshift-controller-manager-operator found in cluster-openshift-controller-manager-operator:cluster-openshift-controller-manager-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-storage-operator found in cluster-storage-operator:cluster-storage-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-config-operator found in cluster-config-operator:cluster-config-operator-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:cluster-control-plane-machine-set-operator found in cluster-control-plane-machine-set-operator:control-plane-machine-set-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:olmv0 found in operator-lifecycle-manager:olmv0-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:olmv1 found in olm-operator-controller:olmv1-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:machine-config-operator found in machine-config-operator:machine-config-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:machine-api-operator found in machine-api-operator:machine-api-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:hyperkube found in hyperkube:k8s-tests-ext using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Extension openshift:payload:origin found in tests:openshift-tests using API version v1.1" time="2025-11-05T04:41:11Z" level=info msg="Determined all potential environment flags" api-group="[apiextensions.k8s.io coordination.k8s.io build.openshift.io security.openshift.io autoscaling.openshift.io k8s.ovn.org whereabouts.cni.cncf.io certificates.k8s.io metal3.io authorization.k8s.io flowcontrol.apiserver.k8s.io infrastructure.cluster.x-k8s.io network.operator.openshift.io config.openshift.io 
ipam.cluster.x-k8s.io machineconfiguration.openshift.io snapshot.storage.k8s.io console.openshift.io olm.operatorframework.io operator.openshift.io events.k8s.io packages.operators.coreos.com cloudcredential.openshift.io tuned.openshift.io batch apps.openshift.io cloud.network.openshift.io helm.openshift.io authentication.k8s.io scheduling.k8s.io resource.k8s.io k8s.cni.cncf.io monitoring.coreos.com rbac.authorization.k8s.io ingress.operator.openshift.io machine.openshift.io samples.operator.openshift.io user.openshift.io gateway.networking.k8s.io security.internal.openshift.io admissionregistration.k8s.io discovery.k8s.io controlplane.operator.openshift.io migration.k8s.io quota.openshift.io autoscaling storage.k8s.io node.k8s.io authorization.openshift.io monitoring.openshift.io operators.coreos.com apiregistration.k8s.io image.openshift.io template.openshift.io apiserver.openshift.io route.openshift.io populator.storage.k8s.io oauth.openshift.io project.openshift.io apps policy performance.openshift.io metrics.k8s.io networking.k8s.io imageregistry.operator.openshift.io policy.networking.k8s.io]" architecture="[amd64]" external-connectivity="[Direct]" feature-gate="[ManagedBootImagesvSphere ServiceAccountTokenNodeBinding ConsistentListFromCache RecoverVolumeExpansionFailure SchedulerPopFromBackoffQ ComponentSLIs NetworkDiagnosticsConfig CRDValidationRatcheting ListFromCacheSnapshot LoadBalancerIPMode NodeInclusionPolicyInPodTopologySpread NodeLogQuery PodLifecycleSleepAction ContainerCheckpoint CustomResourceFieldSelectors JobManagedBy PodObservedGenerationTracking RecursiveReadOnlyMounts RetryGenerateName SELinuxMountReadWriteOncePod SupplementalGroupsPolicy CSIMigrationPortworx PreferSameTrafficDistribution UnauthenticatedHTTP2DOSMitigation ExecProbeTimeout HonorPVReclaimPolicy ServiceAccountNodeAudienceRestriction KMSv1 CPUManagerPolicyBetaOptions DRASchedulerFilterTimeout KubeletSeparateDiskGC TopologyManagerPolicyOptions BuildCSIVolumes UserNamespacesPodSecurityStandards OpenShiftPodSecurityAdmission KubeletPodResourcesDynamicResources CPMSMachineNamePrefix VSphereMultiDisk APIServerIdentity WindowsGracefulNodeShutdown AggregatedDiscoveryRemoveBetaType KubeletPodResourcesListUseActivePods MatchLabelKeysInPodTopologySpreadSelectorMerge ImageMaximumGCAge MatchLabelKeysInPodTopologySpread OrderedNamespaceDeletion HighlyAvailableArbiter PreconfiguredUDNAddresses StoragePerformantSecurityPolicy DisableNodeKubeProxyVersion GatewayAPI VolumeAttributesClass AllowParsingUserUIDFromCertAuth AuthorizeWithSelectors KubeletPodResourcesGet RelaxedDNSSearchValidation StructuredAuthorizationConfiguration GCPClusterHostedDNSInstall RouteAdvertisements MultiCIDRServiceAllocator StatefulSetAutoDeletePVC TokenRequestServiceAccountUIDValidation NetworkSegmentation PreventStaticPodAPIReferences RelaxedEnvironmentVariableValidation StrictCostEnforcementForWebhooks WinDSR DRAResourceClaimDeviceStatus GracefulNodeShutdownBasedOnPodPriority KubeletFineGrainedAuthz LoggingBetaOptions CPUManagerPolicyOptions InOrderInformers LogarithmicScaleDown MemoryManager SeparateTaintEvictionController ServiceAccountTokenJTI StorageNamespaceIndex StreamingCollectionEncodingToProtobuf CronJobsScheduledAnnotation DRAAdminAccess PodLevelResources StructuredAuthenticationConfiguration TopologyAwareHints AzureWorkloadIdentity PinnedImages UserNamespacesSupport BtreeWatchCache JobBackoffLimitPerIndex SchedulerAsyncPreemption WatchList AdditionalRoutingCapabilities ManagedBootImages ContextualLogging 
PodReadyToStartContainersCondition SchedulerQueueingHints StrictCostEnforcementForVAP WinOverlay GatewayAPIController NewOLMWebhookProviderOpenshiftServiceCA UpgradeStatus GracefulNodeShutdown StreamingCollectionEncodingToJSON ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings PodDeletionCost SELinuxChangePolicy MetricsCollectionProfiles SigstoreImageVerification KubeletTracing PortForwardWebsockets RotateKubeletServerCertificate SchedulerAsyncAPICalls NewOLM APIResponseCompression AnyVolumeDataSource AuthorizeNodeWithSelectors DisableCPUQuotaWithExclusiveCPUs ReloadKubeletServerCertificateFile ServiceAccountTokenPodNodeInfo StructuredAuthenticationConfigurationEgressSelector MachineConfigNodes KubeletPSI MatchLabelKeysInPodAffinity SystemdWatchdog ConsolePluginContentSecurityPolicy RouteExternalCertificate KubeletServiceAccountTokenForCredentialProviders PodLifecycleSleepActionAllowZero ProbeHostPodSecurityStandards NodeSwap AdminNetworkPolicy ProcMountType DeclarativeValidation DisableAllocatorDualWrite RemoteRequestHeaderUID ServiceAccountTokenNodeBindingValidation SizeMemoryBackedVolumes ManagedBootImagesAzure NetworkLiveMigration AnonymousAuthConfigurableEndpoints ExternalServiceAccountTokenSigner NFTablesProxyMode SidecarContainers ManagedBootImagesAWS InPlacePodVerticalScaling JobPodReplacementPolicy PodIndexLabel ResilientWatchCacheInitialization SizeBasedListCostEstimate AlibabaPlatform ImageVolume VSphereMultiNetworks APIServerTracing PodSchedulingReadiness DRAPrioritizedList JobSuccessPolicy KubeletCgroupDriverFromCRI TopologyManagerPolicyBetaOptions DetectCacheInconsistency OpenAPIEnums ServiceTrafficDistribution StorageVersionHash]" network="[OVNKubernetes]" network-stack="[ipv4]" optional-capability="[Build CSISnapshot CloudControllerManager CloudCredential Console DeploymentConfig ImageRegistry Ingress Insights MachineAPI NodeTuning OperatorLifecycleManager OperatorLifecycleManagerV1 Storage baremetal marketplace openshift-samples]" platform="[gce]" topology="[HighlyAvailable]" upgrade="[None]" version="[4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest]" time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=openshift-tests time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=openshift-tests time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=k8s-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=k8s-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=cluster-openshift-apiserver-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=cluster-openshift-apiserver-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=service-ca-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=service-ca-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=machine-api-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=machine-api-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=cluster-monitoring-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=cluster-monitoring-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=cluster-kube-controller-manager-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=cluster-kube-controller-manager-operator-tests-ext 
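The "Determined all potential environment flags" record above is effectively a cluster fingerprint: every served API group, every enabled feature gate, plus network type, platform, architecture, optional capabilities, topology, upgrade mode, and payload version. The per-binary records that follow expand exactly that fingerprint into repeated --api-group=... and --feature-gate=... arguments on each extension's list command, so each binary can filter its advertised tests to what is applicable in this environment. A sketch of the expansion (Python; the env map is heavily abridged from the arrays above, and list_command_args is a hypothetical illustration, not the runner's actual code):

def list_command_args(env):
    """Expand determined environment values into repeated CLI flags,
    matching the expanded form seen in the records below."""
    args = []
    for flag, values in env.items():
        args.extend(f"--{flag}={v}" for v in values)
    return args

# Abridged from the "Determined all potential environment flags" record above;
# the real record carries dozens of api-groups and far more feature gates.
env = {
    "network": ["OVNKubernetes"],
    "network-stack": ["ipv4"],
    "external-connectivity": ["Direct"],
    "platform": ["gce"],
    "api-group": ["config.openshift.io", "machine.openshift.io"],  # abridged
    "feature-gate": ["GatewayAPI", "NewOLM"],                      # abridged
    "upgrade": ["None"],
    "architecture": ["amd64"],
    "optional-capability": ["MachineAPI", "Storage"],              # abridged
    "topology": ["HighlyAvailable"],
    "version": ["4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest"],
}

print(" ".join(list_command_args(env)))

Repeating a flag once per value, in insertion order, reproduces the shape of the expanded flag lists logged below.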
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions 
--feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs 
--feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=service-ca-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=oauth-apiserver-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=oauth-apiserver-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io 
--api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter 
--feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID 
--feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-kube-controller-manager-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io 
--api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority 
--feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency 
--feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-openshift-apiserver-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=machine-config-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=machine-config-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=cluster-storage-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=cluster-storage-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy 
--api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration 
--feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager 
--optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=machine-config-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy 
--feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing 
--feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-storage-operator-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: [same flag set as the cluster-storage-operator-tests-ext entry above]" binary=cluster-monitoring-operator-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: [same flag set as the cluster-storage-operator-tests-ext entry above]" binary=openshift-tests
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: [same flag set as the cluster-storage-operator-tests-ext entry above]" binary=k8s-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: [same flag set as the cluster-storage-operator-tests-ext entry above]" binary=machine-api-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: [same flag set as the cluster-storage-operator-tests-ext entry above]" binary=oauth-apiserver-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Listed 1 tests in 11.30352ms" binary=cluster-openshift-apiserver-operator-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=olmv0-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=olmv0-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Listed 1 tests in 11.36676ms" binary=cluster-kube-controller-manager-operator-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=cluster-kube-storage-version-migrator-operator-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=cluster-kube-storage-version-migrator-operator-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: [same flag set as the cluster-storage-operator-tests-ext entry above]" binary=olmv0-tests-ext
time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io
--api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages 
--feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage 
--optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-kube-storage-version-migrator-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listed 1 tests in 12.168063ms" binary=cluster-monitoring-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=cluster-kube-apiserver-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=cluster-kube-apiserver-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction 
--feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown 
--feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-kube-apiserver-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listed 1 tests in 13.204848ms" binary=oauth-apiserver-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=control-plane-machine-set-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=control-plane-machine-set-tests-ext time="2025-11-05T04:41:11Z" level=info 
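Note: the flag payloads elided above are identical for every *-tests-ext binary; only the binary= field and the timestamps differ. A minimal standalone Go sketch of how one could normalize and compare two such logged payloads (illustrative only; this is not ci-operator or openshift-tests-extension source, and parseFlags is a name invented here):

    // flagdiff.go: check that two logged "--key=value" payloads carry the
    // same flag set, ignoring ordering. Illustrative sketch only.
    package main

    import (
    	"fmt"
    	"sort"
    	"strings"
    )

    // parseFlags extracts the individual --key=value tokens from a logged
    // payload and sorts them so ordering differences do not matter.
    func parseFlags(payload string) []string {
    	fields := strings.Fields(payload)
    	flags := make([]string, 0, len(fields))
    	for _, f := range fields {
    		if strings.HasPrefix(f, "--") {
    			flags = append(flags, f)
    		}
    	}
    	sort.Strings(flags)
    	return flags
    }

    func main() {
    	a := parseFlags("--platform=gce --network=OVNKubernetes --upgrade=None")
    	b := parseFlags("--network=OVNKubernetes --upgrade=None --platform=gce")
    	// Prints "identical: true" for these two sample payloads.
    	fmt.Println("identical:", strings.Join(a, " ") == strings.Join(b, " "))
    }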
msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout 
--feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile 
--feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=control-plane-machine-set-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listed 1 tests in 14.826963ms" binary=service-ca-operator-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Listing tests" binary=openshift-apiserver-tests-ext time="2025-11-05T04:41:11Z" level=info msg="OTE API version is: v1.1" binary=openshift-apiserver-tests-ext time="2025-11-05T04:41:11Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io 
--api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread 
--feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation 
--feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=openshift-apiserver-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 1 tests in 11.676414ms" binary=cluster-kube-storage-version-migrator-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listing tests" binary=olmv1-tests-ext time="2025-11-05T04:41:12Z" level=info msg="OTE API version is: v1.1" binary=olmv1-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 1 tests in 11.043518ms" binary=cluster-kube-apiserver-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listing tests" binary=openshift-controller-manager-tests-ext time="2025-11-05T04:41:12Z" level=info msg="OTE API version is: v1.1" binary=openshift-controller-manager-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io 
--api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth 
--feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner 
--feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=olmv1-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io 
--api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex 
--feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig 
--optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=openshift-controller-manager-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 1 tests in 10.902975ms" binary=openshift-apiserver-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listing tests" binary=cluster-openshift-controller-manager-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="OTE API version is: v1.1" binary=cluster-openshift-controller-manager-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs 
--feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx --feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging 
--feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing --feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-openshift-controller-manager-operator-tests-ext time="2025-11-05T04:41:12Z" level=info 
msg="Listed 1 tests in 12.324664ms" binary=openshift-controller-manager-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listing tests" binary=cluster-config-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="OTE API version is: v1.1" binary=cluster-config-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Adding the following applicable flags to the list command: --network=OVNKubernetes --network-stack=ipv4 --external-connectivity=Direct --platform=gce --api-group=apiextensions.k8s.io --api-group=coordination.k8s.io --api-group=build.openshift.io --api-group=security.openshift.io --api-group=autoscaling.openshift.io --api-group=k8s.ovn.org --api-group=whereabouts.cni.cncf.io --api-group=certificates.k8s.io --api-group=metal3.io --api-group=authorization.k8s.io --api-group=flowcontrol.apiserver.k8s.io --api-group=infrastructure.cluster.x-k8s.io --api-group=network.operator.openshift.io --api-group=config.openshift.io --api-group=ipam.cluster.x-k8s.io --api-group=machineconfiguration.openshift.io --api-group=snapshot.storage.k8s.io --api-group=console.openshift.io --api-group=olm.operatorframework.io --api-group=operator.openshift.io --api-group=events.k8s.io --api-group=packages.operators.coreos.com --api-group=cloudcredential.openshift.io --api-group=tuned.openshift.io --api-group=batch --api-group=apps.openshift.io --api-group=cloud.network.openshift.io --api-group=helm.openshift.io --api-group=authentication.k8s.io --api-group=scheduling.k8s.io --api-group=resource.k8s.io --api-group=k8s.cni.cncf.io --api-group=monitoring.coreos.com --api-group=rbac.authorization.k8s.io --api-group=ingress.operator.openshift.io --api-group=machine.openshift.io --api-group=samples.operator.openshift.io --api-group=user.openshift.io --api-group=gateway.networking.k8s.io --api-group=security.internal.openshift.io --api-group=admissionregistration.k8s.io --api-group=discovery.k8s.io --api-group=controlplane.operator.openshift.io --api-group=migration.k8s.io --api-group=quota.openshift.io --api-group=autoscaling --api-group=storage.k8s.io --api-group=node.k8s.io --api-group=authorization.openshift.io --api-group=monitoring.openshift.io --api-group=operators.coreos.com --api-group=apiregistration.k8s.io --api-group=image.openshift.io --api-group=template.openshift.io --api-group=apiserver.openshift.io --api-group=route.openshift.io --api-group=populator.storage.k8s.io --api-group=oauth.openshift.io --api-group=project.openshift.io --api-group=apps --api-group=policy --api-group=performance.openshift.io --api-group=metrics.k8s.io --api-group=networking.k8s.io --api-group=imageregistry.operator.openshift.io --api-group=policy.networking.k8s.io --feature-gate=ManagedBootImagesvSphere --feature-gate=ServiceAccountTokenNodeBinding --feature-gate=ConsistentListFromCache --feature-gate=RecoverVolumeExpansionFailure --feature-gate=SchedulerPopFromBackoffQ --feature-gate=ComponentSLIs --feature-gate=NetworkDiagnosticsConfig --feature-gate=CRDValidationRatcheting --feature-gate=ListFromCacheSnapshot --feature-gate=LoadBalancerIPMode --feature-gate=NodeInclusionPolicyInPodTopologySpread --feature-gate=NodeLogQuery --feature-gate=PodLifecycleSleepAction --feature-gate=ContainerCheckpoint --feature-gate=CustomResourceFieldSelectors --feature-gate=JobManagedBy --feature-gate=PodObservedGenerationTracking --feature-gate=RecursiveReadOnlyMounts --feature-gate=RetryGenerateName --feature-gate=SELinuxMountReadWriteOncePod --feature-gate=SupplementalGroupsPolicy --feature-gate=CSIMigrationPortworx 
--feature-gate=PreferSameTrafficDistribution --feature-gate=UnauthenticatedHTTP2DOSMitigation --feature-gate=ExecProbeTimeout --feature-gate=HonorPVReclaimPolicy --feature-gate=ServiceAccountNodeAudienceRestriction --feature-gate=KMSv1 --feature-gate=CPUManagerPolicyBetaOptions --feature-gate=DRASchedulerFilterTimeout --feature-gate=KubeletSeparateDiskGC --feature-gate=TopologyManagerPolicyOptions --feature-gate=BuildCSIVolumes --feature-gate=UserNamespacesPodSecurityStandards --feature-gate=OpenShiftPodSecurityAdmission --feature-gate=KubeletPodResourcesDynamicResources --feature-gate=CPMSMachineNamePrefix --feature-gate=VSphereMultiDisk --feature-gate=APIServerIdentity --feature-gate=WindowsGracefulNodeShutdown --feature-gate=AggregatedDiscoveryRemoveBetaType --feature-gate=KubeletPodResourcesListUseActivePods --feature-gate=MatchLabelKeysInPodTopologySpreadSelectorMerge --feature-gate=ImageMaximumGCAge --feature-gate=MatchLabelKeysInPodTopologySpread --feature-gate=OrderedNamespaceDeletion --feature-gate=HighlyAvailableArbiter --feature-gate=PreconfiguredUDNAddresses --feature-gate=StoragePerformantSecurityPolicy --feature-gate=DisableNodeKubeProxyVersion --feature-gate=GatewayAPI --feature-gate=VolumeAttributesClass --feature-gate=AllowParsingUserUIDFromCertAuth --feature-gate=AuthorizeWithSelectors --feature-gate=KubeletPodResourcesGet --feature-gate=RelaxedDNSSearchValidation --feature-gate=StructuredAuthorizationConfiguration --feature-gate=GCPClusterHostedDNSInstall --feature-gate=RouteAdvertisements --feature-gate=MultiCIDRServiceAllocator --feature-gate=StatefulSetAutoDeletePVC --feature-gate=TokenRequestServiceAccountUIDValidation --feature-gate=NetworkSegmentation --feature-gate=PreventStaticPodAPIReferences --feature-gate=RelaxedEnvironmentVariableValidation --feature-gate=StrictCostEnforcementForWebhooks --feature-gate=WinDSR --feature-gate=DRAResourceClaimDeviceStatus --feature-gate=GracefulNodeShutdownBasedOnPodPriority --feature-gate=KubeletFineGrainedAuthz --feature-gate=LoggingBetaOptions --feature-gate=CPUManagerPolicyOptions --feature-gate=InOrderInformers --feature-gate=LogarithmicScaleDown --feature-gate=MemoryManager --feature-gate=SeparateTaintEvictionController --feature-gate=ServiceAccountTokenJTI --feature-gate=StorageNamespaceIndex --feature-gate=StreamingCollectionEncodingToProtobuf --feature-gate=CronJobsScheduledAnnotation --feature-gate=DRAAdminAccess --feature-gate=PodLevelResources --feature-gate=StructuredAuthenticationConfiguration --feature-gate=TopologyAwareHints --feature-gate=AzureWorkloadIdentity --feature-gate=PinnedImages --feature-gate=UserNamespacesSupport --feature-gate=BtreeWatchCache --feature-gate=JobBackoffLimitPerIndex --feature-gate=SchedulerAsyncPreemption --feature-gate=WatchList --feature-gate=AdditionalRoutingCapabilities --feature-gate=ManagedBootImages --feature-gate=ContextualLogging --feature-gate=PodReadyToStartContainersCondition --feature-gate=SchedulerQueueingHints --feature-gate=StrictCostEnforcementForVAP --feature-gate=WinOverlay --feature-gate=GatewayAPIController --feature-gate=NewOLMWebhookProviderOpenshiftServiceCA --feature-gate=UpgradeStatus --feature-gate=GracefulNodeShutdown --feature-gate=StreamingCollectionEncodingToJSON --feature-gate=ExternalOIDC --feature-gate=ExternalOIDCWithUIDAndExtraClaimMappings --feature-gate=PodDeletionCost --feature-gate=SELinuxChangePolicy --feature-gate=MetricsCollectionProfiles --feature-gate=SigstoreImageVerification --feature-gate=KubeletTracing 
--feature-gate=PortForwardWebsockets --feature-gate=RotateKubeletServerCertificate --feature-gate=SchedulerAsyncAPICalls --feature-gate=NewOLM --feature-gate=APIResponseCompression --feature-gate=AnyVolumeDataSource --feature-gate=AuthorizeNodeWithSelectors --feature-gate=DisableCPUQuotaWithExclusiveCPUs --feature-gate=ReloadKubeletServerCertificateFile --feature-gate=ServiceAccountTokenPodNodeInfo --feature-gate=StructuredAuthenticationConfigurationEgressSelector --feature-gate=MachineConfigNodes --feature-gate=KubeletPSI --feature-gate=MatchLabelKeysInPodAffinity --feature-gate=SystemdWatchdog --feature-gate=ConsolePluginContentSecurityPolicy --feature-gate=RouteExternalCertificate --feature-gate=KubeletServiceAccountTokenForCredentialProviders --feature-gate=PodLifecycleSleepActionAllowZero --feature-gate=ProbeHostPodSecurityStandards --feature-gate=NodeSwap --feature-gate=AdminNetworkPolicy --feature-gate=ProcMountType --feature-gate=DeclarativeValidation --feature-gate=DisableAllocatorDualWrite --feature-gate=RemoteRequestHeaderUID --feature-gate=ServiceAccountTokenNodeBindingValidation --feature-gate=SizeMemoryBackedVolumes --feature-gate=ManagedBootImagesAzure --feature-gate=NetworkLiveMigration --feature-gate=AnonymousAuthConfigurableEndpoints --feature-gate=ExternalServiceAccountTokenSigner --feature-gate=NFTablesProxyMode --feature-gate=SidecarContainers --feature-gate=ManagedBootImagesAWS --feature-gate=InPlacePodVerticalScaling --feature-gate=JobPodReplacementPolicy --feature-gate=PodIndexLabel --feature-gate=ResilientWatchCacheInitialization --feature-gate=SizeBasedListCostEstimate --feature-gate=AlibabaPlatform --feature-gate=ImageVolume --feature-gate=VSphereMultiNetworks --feature-gate=APIServerTracing --feature-gate=PodSchedulingReadiness --feature-gate=DRAPrioritizedList --feature-gate=JobSuccessPolicy --feature-gate=KubeletCgroupDriverFromCRI --feature-gate=TopologyManagerPolicyBetaOptions --feature-gate=DetectCacheInconsistency --feature-gate=OpenAPIEnums --feature-gate=ServiceTrafficDistribution --feature-gate=StorageVersionHash --upgrade=None --architecture=amd64 --optional-capability=Build --optional-capability=CSISnapshot --optional-capability=CloudControllerManager --optional-capability=CloudCredential --optional-capability=Console --optional-capability=DeploymentConfig --optional-capability=ImageRegistry --optional-capability=Ingress --optional-capability=Insights --optional-capability=MachineAPI --optional-capability=NodeTuning --optional-capability=OperatorLifecycleManager --optional-capability=OperatorLifecycleManagerV1 --optional-capability=Storage --optional-capability=baremetal --optional-capability=marketplace --optional-capability=openshift-samples --topology=HighlyAvailable --version=4.21.0-0.ci-2025-11-05-034259-test-ci-op-x0f88pwp-latest" binary=cluster-config-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 1 tests in 10.330109ms" binary=cluster-openshift-controller-manager-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 6 tests in 39.396797ms" binary=cluster-storage-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 1 tests in 9.479917ms" binary=cluster-config-operator-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 22 tests in 43.608648ms" binary=control-plane-machine-set-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 31 tests in 88.882816ms" binary=olmv0-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 47 tests in 88.539686ms" binary=olmv1-tests-ext 
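Each *-tests-ext binary's list command above receives the same environment description — long runs of repeated --feature-gate, --api-group, and --optional-capability flags — so it can report only the tests applicable to this cluster. A minimal Go sketch of that repeated-flag pattern, assuming a hypothetical testCase type with a requiredGates field (this is not the real openshift-tests-extension API, just the flag-handling shape):

```go
// Sketch: accept a repeatable --feature-gate flag and list only tests
// whose required gates are enabled. testCase/requiredGates are
// illustrative stand-ins, not origin's or OTE's actual types.
package main

import (
	"flag"
	"fmt"
	"strings"
)

// multiFlag lets one flag name be passed many times, e.g.
// --feature-gate=WatchList --feature-gate=NewOLM ...
type multiFlag []string

func (m *multiFlag) String() string     { return strings.Join(*m, ",") }
func (m *multiFlag) Set(v string) error { *m = append(*m, v); return nil }

type testCase struct {
	name          string
	requiredGates []string // hypothetical: gates this test needs enabled
}

func main() {
	var gates multiFlag
	flag.Var(&gates, "feature-gate", "enabled feature gate (repeatable)")
	flag.Parse()

	enabled := make(map[string]bool, len(gates))
	for _, g := range gates {
		enabled[g] = true
	}

	tests := []testCase{
		{name: "prefix rollout test", requiredGates: []string{"CPMSMachineNamePrefix"}},
		{name: "basic rollout test"},
	}

	for _, tc := range tests {
		applicable := true
		for _, g := range tc.requiredGates {
			if !enabled[g] {
				applicable = false
				break
			}
		}
		if applicable {
			fmt.Println(tc.name)
		}
	}
}
```

Run as `go run . --feature-gate=CPMSMachineNamePrefix` and both tests are listed; without the flag, only the ungated one is — the same mechanism by which gate-tagged tests such as the [OCPFeatureGate:CPMSMachineNamePrefix] cases later in this log become applicable.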
time="2025-11-05T04:41:12Z" level=info msg="Listed 17 tests in 112.352332ms" binary=machine-config-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 0 tests in 568.18585ms" binary=machine-api-tests-ext time="2025-11-05T04:41:12Z" level=info msg="Listed 1027 tests in 760.648568ms" binary=openshift-tests time="2025-11-05T04:41:13Z" level=info msg="Listed 6280 tests in 1.029444819s" binary=k8s-tests-ext time="2025-11-05T04:41:13Z" level=info msg="Discovered 7441 total tests" time="2025-11-05T04:41:13Z" level=info msg="Generated skips for cluster state" skips="[[Skipped:gce] [Skipped:Network/OVNKubernetes] [Feature:Networking-IPv6] [Feature:IPv6DualStack [Feature:SCTPConnectivity] [Requires:HypervisorSSHConfig]]" time="2025-11-05T04:41:13Z" level=info msg="Applying filter: suite-qualifiers" before=7441 component=test-filter filter=suite-qualifiers time="2025-11-05T04:41:16Z" level=info msg="Filter suite-qualifiers completed - removed 7357 tests" after=84 before=7441 component=test-filter filter=suite-qualifiers removed=7357 time="2025-11-05T04:41:16Z" level=info msg="Applying filter: kube-rebase-tests" before=84 component=test-filter filter=kube-rebase-tests time="2025-11-05T04:41:16Z" level=info msg="Filter kube-rebase-tests completed - removed 0 tests" after=84 before=84 component=test-filter filter=kube-rebase-tests removed=0 time="2025-11-05T04:41:16Z" level=info msg="Applying filter: disabled-tests" before=84 component=test-filter filter=disabled-tests time="2025-11-05T04:41:16Z" level=info msg="Filter disabled-tests completed - removed 0 tests" after=84 before=84 component=test-filter filter=disabled-tests removed=0 time="2025-11-05T04:41:16Z" level=info msg="Applying filter: cluster-state" before=84 component=test-filter filter=cluster-state time="2025-11-05T04:41:16Z" level=info msg="Filter cluster-state completed - removed 0 tests" after=84 before=84 component=test-filter filter=cluster-state removed=0 time="2025-11-05T04:41:16Z" level=info msg="Filter chain completed with 84 tests" component=test-filter final_count=84 time="2025-11-05T04:41:16Z" level=info msg="Suite defined parallelism 0" time="2025-11-05T04:41:16Z" level=info msg="Found 3 worker nodes" time="2025-11-05T04:41:16Z" level=info msg="Found 6 nodes" time="2025-11-05T04:41:16Z" level=info msg="Total nodes: 6, Worker nodes: 3, Parallelism: 10" time="2025-11-05T04:41:16Z" level=info msg="Waiting for all cluster operators to become stable" I1105 04:41:16.527365 1669 framework.go:2334] microshift-version configmap not found time="2025-11-05T04:44:16Z" level=info msg=" Preparing pod-lifecycle for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Preparing e2e-test-analyzer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing high-cpu-metric-collector for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-storage-invariants for Storage" time="2025-11-05T04:44:16Z" level=info msg=" Preparing high-cpu-test-analyzer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing required-scc-annotation-checker for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg=" Preparing etcd-log-analyzer for etcd" time="2025-11-05T04:44:16Z" level=info msg=" Preparing node-lifecycle for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Preparing machine-lifecycle for Cluster-Lifecycle / machine-api" time="2025-11-05T04:44:16Z" level=info msg=" Preparing operator-state-analyzer for Cluster Version Operator" 
time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-networking-invariants for Networking / cluster-network-operator" time="2025-11-05T04:44:16Z" level=info msg=" Preparing termination-message-policy for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg=" Preparing graceful-shutdown-analyzer for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Preparing known-image-checker for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing interval-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-etcd-invariants for etcd" time="2025-11-05T04:44:16Z" level=info msg=" Preparing tracked-resources-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing initial-and-final-operator-log-scraper for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing watch-namespaces for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing cluster-version-checker for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg="Starting PodsLogStreamer" component=PodsStreamer I1105 04:44:16.562637 1669 framework.go:2334] microshift-version configmap not found time="2025-11-05T04:44:16Z" level=info msg=" Preparing kubelet-log-collector for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Preparing additional-events-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing etcd-disk-metrics-intervals for etcd" time="2025-11-05T04:44:16Z" level=info msg=" Preparing clusteroperator-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing generation-analyzer for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-test-framework-invariants-alerts for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing cluster-info-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-authentication-invariants for apiserver-auth" time="2025-11-05T04:44:16Z" level=info msg=" Preparing lease-checker for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing timeline-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing oc-adm-upgrade-status for oc / update" I1105 04:44:16.578302 1669 framework.go:2334] microshift-version configmap not found time="2025-11-05T04:44:16Z" level=info msg=" Preparing event-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-kube-apiserver-invariants for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-cvo-invariants for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg=" Preparing node-state-analyzer for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Preparing azure-metrics-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Preparing audit-log-analyzer for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Preparing legacy-node-invariants for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Starting event-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-storage-invariants for Storage" time="2025-11-05T04:44:16Z" level=info msg=" Starting audit-log-analyzer for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Starting pod-lifecycle for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-cvo-invariants for Cluster Version Operator" 
time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-kube-apiserver-invariants for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Starting node-state-analyzer for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Starting initial-and-final-operator-log-scraper for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-networking-invariants for Networking / cluster-network-operator" time="2025-11-05T04:44:16Z" level=info msg=" Starting high-cpu-metric-collector for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Starting cluster-version-checker for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg=" Starting machine-lifecycle for Cluster-Lifecycle / machine-api" time="2025-11-05T04:44:16Z" level=info msg=" Starting watch-namespaces for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-authentication-invariants for apiserver-auth" time="2025-11-05T04:44:16Z" level=info msg=" Starting high-cpu-test-analyzer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting etcd-log-analyzer for etcd" time="2025-11-05T04:44:16Z" level=info msg=" Starting lease-checker for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting additional-events-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting etcd-disk-metrics-intervals for etcd" time="2025-11-05T04:44:16Z" level=info msg=" Starting oc-adm-upgrade-status for oc / update" time="2025-11-05T04:44:16Z" level=info msg=" Starting interval-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-etcd-invariants for etcd" time="2025-11-05T04:44:16Z" level=info msg=" Starting kubelet-log-collector for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Starting termination-message-policy for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg=" Starting e2e-test-analyzer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-test-framework-invariants-alerts for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting clusteroperator-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting generation-analyzer for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Starting azure-metrics-collector for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting cluster-info-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting known-image-checker for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting timeline-serializer for Test Framework" Starting SimultaneousPodIPController time="2025-11-05T04:44:16Z" level=info msg=" Starting legacy-node-invariants for Node / Kubelet" time="2025-11-05T04:44:16Z" level=info msg=" Starting graceful-shutdown-analyzer for kube-apiserver" time="2025-11-05T04:44:16Z" level=info msg=" Starting tracked-resources-serializer for Test Framework" time="2025-11-05T04:44:16Z" level=info msg=" Starting operator-state-analyzer for Cluster Version Operator" I1105 04:44:16.603813 1669 shared_informer.go:349] "Waiting for caches to sync" controller="SimultaneousPodIPController" I1105 04:44:16.603803 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:44:16Z" level=info msg=" Starting required-scc-annotation-checker for Cluster Version Operator" time="2025-11-05T04:44:16Z" level=info msg=" Starting 
node-lifecycle for Node / Kubelet" I1105 04:44:16.615160 1669 framework.go:2334] microshift-version configmap not found I1105 04:44:16.615178 1669 framework.go:2334] microshift-version configmap not found I1105 04:44:16.627150 1669 framework.go:2334] microshift-version configmap not found I1105 04:44:16.704259 1669 shared_informer.go:356] "Caches are synced" controller="SimultaneousPodIPController" All monitor tests started. time="2025-11-05T04:44:36Z" level=info msg="Found 29 early tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 late tests" time="2025-11-05T04:44:36Z" level=info msg="Determining sharding of 55 tests" shardCount=0 shardID=0 sharder=hash time="2025-11-05T04:44:36Z" level=warning msg="Sharding disabled, returning all tests" time="2025-11-05T04:44:36Z" level=info msg="Found 55 openshift tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 kube tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 storage tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 network k8s tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 HPA tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 network tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 builds tests" time="2025-11-05T04:44:36Z" level=info msg="Found 0 must-gather tests" started: 0/1/29 "[sig-ci] [Early] prow job name should match platform type [Suite:openshift/conformance/parallel]" started: 0/2/29 "[sig-ci] [Early] prow job name should match security mode [Suite:openshift/conformance/parallel]" started: 0/3/29 "[sig-cluster-lifecycle][Feature:Machines] Managed cluster should [sig-scheduling][Early] control plane machine set operator should not have any events [Suite:openshift/conformance/parallel]" started: 0/4/29 "[sig-scheduling][Early] The openshift-image-registry pods [apigroup:imageregistry.operator.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/5/29 "[sig-cluster-lifecycle][Feature:Machines][Early] Managed cluster should have same number of Machines and Nodes [apigroup:machine.openshift.io] [Suite:openshift/conformance/parallel]" started: 0/6/29 "[sig-scheduling][Early] The openshift-authentication pods [apigroup:oauth.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/7/29 "[sig-node] Managed cluster record the number of nodes at the beginning of the tests [Early] [Suite:openshift/conformance/parallel]" started: 0/8/29 "[sig-arch][Early] CRDs for openshift.io should have subresource.status [Suite:openshift/conformance/parallel]" started: 0/9/29 "[sig-arch][Early] APIs for openshift.io must have stable versions [Suite:openshift/conformance/parallel]" started: 0/10/29 "[sig-scheduling][Early] The openshift-apiserver pods [apigroup:authorization.openshift.io][apigroup:build.openshift.io][apigroup:image.openshift.io][apigroup:project.openshift.io][apigroup:quota.openshift.io][apigroup:route.openshift.io][apigroup:security.openshift.io][apigroup:template.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.2s) 2025-11-05T04:44:39 "[sig-cluster-lifecycle][Feature:Machines] Managed cluster should [sig-scheduling][Early] control plane machine set operator should not have any events [Suite:openshift/conformance/parallel]" started: 0/11/29 "[sig-scheduling][Early] The HAProxy router pods [apigroup:route.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.1s) 
2025-11-05T04:44:39 "[sig-cluster-lifecycle][Feature:Machines][Early] Managed cluster should have same number of Machines and Nodes [apigroup:machine.openshift.io] [Suite:openshift/conformance/parallel]" started: 0/12/29 "[sig-cluster-lifecycle][Feature:Machines] Managed cluster should [sig-scheduling][Early] control plane machine set operator should not cause an early rollout [Suite:openshift/conformance/parallel]" passed: (1.4s) 2025-11-05T04:44:39 "[sig-ci] [Early] prow job name should match platform type [Suite:openshift/conformance/parallel]" started: 0/13/29 "[sig-etcd] etcd cluster has the same number of master nodes and voting members from the endpoints configmap [Early][apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]" passed: (1.5s) 2025-11-05T04:44:39 "[sig-node] Managed cluster record the number of nodes at the beginning of the tests [Early] [Suite:openshift/conformance/parallel]" started: 0/14/29 "[sig-arch][Early] Operators low level operators should have at least the conditions we had in 4.17 [Suite:openshift/conformance/parallel]" passed: (1.6s) 2025-11-05T04:44:39 "[sig-scheduling][Early] The openshift-image-registry pods [apigroup:imageregistry.operator.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/15/29 "[sig-kubevirt] migration when running openshift cluster on KubeVirt virtual machines and live migrate hosted control plane workers [Early] should maintain node readiness [Suite:openshift/conformance/parallel]" passed: (1.5s) 2025-11-05T04:44:40 "[sig-ci] [Early] prow job name should match security mode [Suite:openshift/conformance/parallel]" started: 0/16/29 "[sig-arch][Early] Managed cluster should [apigroup:config.openshift.io] start all core operators [Suite:openshift/conformance/parallel]" passed: (1.7s) 2025-11-05T04:44:40 "[sig-scheduling][Early] The openshift-apiserver pods [apigroup:authorization.openshift.io][apigroup:build.openshift.io][apigroup:image.openshift.io][apigroup:project.openshift.io][apigroup:quota.openshift.io][apigroup:route.openshift.io][apigroup:security.openshift.io][apigroup:template.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/17/29 "[sig-scheduling][Early] The openshift-monitoring thanos-querier pods [apigroup:monitoring.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.8s) 2025-11-05T04:44:40 "[sig-arch][Early] APIs for openshift.io must have stable versions [Suite:openshift/conformance/parallel]" started: 0/18/29 "[sig-ci] [Early] prow job name should match network type [Suite:openshift/conformance/parallel]" passed: (2.4s) 2025-11-05T04:44:40 "[sig-scheduling][Early] The openshift-authentication pods [apigroup:oauth.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/19/29 "[sig-scheduling][Early] The openshift-oauth-apiserver pods [apigroup:oauth.openshift.io][apigroup:user.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (100ms) 2025-11-05T04:44:40 "[sig-cluster-lifecycle][Feature:Machines] Managed cluster should [sig-scheduling][Early] control plane machine set operator should not cause an early rollout [Suite:openshift/conformance/parallel]" started: 0/20/29 "[sig-ci] [Early] prow job name should match feature set [Suite:openshift/conformance/parallel]" passed: (2.5s) 2025-11-05T04:44:40 "[sig-arch][Early] CRDs for openshift.io should have 
subresource.status [Suite:openshift/conformance/parallel]" started: 0/21/29 "[sig-ci] [Early] prow job name should match cluster version [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]" passed: (100ms) 2025-11-05T04:44:41 "[sig-etcd] etcd cluster has the same number of master nodes and voting members from the endpoints configmap [Early][apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]" started: 0/22/29 "[sig-scheduling][Early] The openshift-monitoring prometheus-adapter pods [apigroup:monitoring.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (400ms) 2025-11-05T04:44:41 "[sig-arch][Early] Operators low level operators should have at least the conditions we had in 4.17 [Suite:openshift/conformance/parallel]" started: 0/23/29 "[sig-etcd] etcd record the start revision of the etcd-operator [Early] [Suite:openshift/conformance/parallel]" passed: (100ms) 2025-11-05T04:44:41 "[sig-arch][Early] Managed cluster should [apigroup:config.openshift.io] start all core operators [Suite:openshift/conformance/parallel]" started: 0/24/29 "[sig-scheduling][Early] The openshift-operator-lifecycle-manager pods [apigroup:packages.operators.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.5s) 2025-11-05T04:44:42 "[sig-scheduling][Early] The HAProxy router pods [apigroup:route.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/25/29 "[sig-scheduling][Early] The openshift-etcd pods [apigroup:operator.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" skip [github.com/openshift/origin/test/extended/kubevirt/util.go:358]: Not running in KubeVirt cluster skipped: (1.4s) 2025-11-05T04:44:42 "[sig-kubevirt] migration when running openshift cluster on KubeVirt virtual machines and live migrate hosted control plane workers [Early] should maintain node readiness [Suite:openshift/conformance/parallel]" started: 0/26/29 "[sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]" passed: (0s) 2025-11-05T04:44:42 "[sig-etcd] etcd record the start revision of the etcd-operator [Early] [Suite:openshift/conformance/parallel]" started: 0/27/29 "[sig-scheduling][Early] The openshift-console console pods [apigroup:console.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.5s) 2025-11-05T04:44:42 "[sig-ci] [Early] prow job name should match network type [Suite:openshift/conformance/parallel]" started: 0/28/29 "[sig-instrumentation] Prometheus [apigroup:image.openshift.io] when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early][apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]" skip [github.com/openshift/origin/test/extended/ci/job_names.go:139]: This is only expected to work on periodics, skipping skipped: (1.2s) 2025-11-05T04:44:43 "[sig-ci] [Early] prow job name should match cluster version [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]" started: 0/29/29 "[sig-scheduling][Early] The openshift-console downloads pods [apigroup:console.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.2s) 2025-11-05T04:44:43 "[sig-scheduling][Early] The openshift-oauth-apiserver pods 
[apigroup:oauth.openshift.io][apigroup:user.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.2s) 2025-11-05T04:44:43 "[sig-ci] [Early] prow job name should match feature set [Suite:openshift/conformance/parallel]" passed: (2.1s) 2025-11-05T04:44:43 "[sig-scheduling][Early] The openshift-monitoring thanos-querier pods [apigroup:monitoring.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.3s) 2025-11-05T04:44:43 "[sig-scheduling][Early] The openshift-monitoring prometheus-adapter pods [apigroup:monitoring.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.3s) 2025-11-05T04:44:43 "[sig-scheduling][Early] The openshift-operator-lifecycle-manager pods [apigroup:packages.operators.coreos.com] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.2s) 2025-11-05T04:44:44 "[sig-scheduling][Early] The openshift-etcd pods [apigroup:operator.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (800ms) 2025-11-05T04:44:44 "[sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]" passed: (1.4s) 2025-11-05T04:44:45 "[sig-scheduling][Early] The openshift-console console pods [apigroup:console.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" passed: (1.5s) 2025-11-05T04:44:45 "[sig-instrumentation] Prometheus [apigroup:image.openshift.io] when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early][apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]" passed: (1.3s) 2025-11-05T04:44:45 "[sig-scheduling][Early] The openshift-console downloads pods [apigroup:console.openshift.io] should be scheduled on different nodes [Suite:openshift/conformance/parallel]" started: 0/1/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the provider spec is changed should perform a rolling update" started: 0/2/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and a defaulted value is deleted from the ControlPlaneMachineSet should have the control plane machine set not cause a rollout" started: 0/3/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected and again MachineNamePrefix is reset should replace the outdated machine when deleted" started: 0/4/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected should replace the outdated machine when deleted" started: 0/5/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should rolling update replace the outdated machine" started: 0/6/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is 
updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should not replace the outdated machine" started: 0/7/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted and the ControlPlaneMachineSet is reactivated should have the control plane machine set not cause a rollout" started: 0/8/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should not replace the outdated machine" started: 0/9/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should rolling update replace the outdated machine" started: 0/10/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and the provider spec of index 2 is not as expected should not replace the outdated machine" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.389 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.416 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.428 STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:46.463 [PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:46.54 STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:46.541 github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func2.2.4.3.ItShouldNotOnDeleteReplaceTheOutdatedMachine.3() /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:188 +0x2f fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference fail [runtime/panic.go:262]: Test Panicked failed: (300ms) 2025-11-05T04:44:46 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and the provider spec of index 2 is not as expected should not replace the outdated machine" started: 1/11/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted and the ControlPlaneMachineSet is reactivated should find all control plane machines to have owner references set" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.382 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.422 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.432 STEP: Updating the machine name prefix of the control plane machine set to "master-prefix" @ 11/05/25 04:44:46.483 STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:46.834 [PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:46.933 
github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func1.2.3.2.ItShouldRollingUpdateReplaceTheOutdatedMachine.3() /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:130 +0x3e fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference fail [runtime/panic.go:262]: Test Panicked failed: (600ms) 2025-11-05T04:44:46 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should rolling update replace the outdated machine" started: 2/12/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and the provider spec of index 2 is not as expected should replace the outdated machine when deleted" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.37 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.426 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.46 STEP: Updating the machine name prefix of the control plane machine set to "master-prefix-on-delete" @ 11/05/25 04:44:46.543 STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:46.706 [PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:46.774 STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:46.774 github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func1.2.4.3.2.ItShouldOnDeleteReplaceTheOutDatedMachineWhenDeleted.5() /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:216 +0x3e fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference fail [runtime/panic.go:262]: Test Panicked failed: (600ms) 2025-11-05T04:44:46 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected should replace the outdated machine when deleted" started: 3/13/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should rolling update replace the outdated machine" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.403 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.447 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.458 STEP: Updating the machine name prefix of the control plane machine set to "master-prefix-on-delete" @ 11/05/25 04:44:46.523 STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:46.795 STEP: Un-setting the machine name prefix of the control plane machine set @ 11/05/25 04:44:46.829 STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:47.006 [PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:47.04 STEP: Updating 
the provider spec of the control plane machine at index 2 @ 11/05/25 04:44:47.04 github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func1.2.4.3.2.3.ItShouldOnDeleteReplaceTheOutDatedMachineWhenDeleted.3() /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:216 +0x3e fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference fail [runtime/panic.go:262]: Test Panicked failed: (700ms) 2025-11-05T04:44:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected and again MachineNamePrefix is reset should replace the outdated machine when deleted" started: 4/14/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should replace the outdated machine when deleted" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.262 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.297 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.311 STEP: Modifying the control plane machine set provider spec @ 11/05/25 04:44:46.343 [FAILED] in [It] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:66 @ 11/05/25 04:44:47.357 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:66]: testFramework is required Expected : nil not to be nil failed: (1.1s) 2025-11-05T04:44:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the provider spec is changed should perform a rolling update" started: 5/15/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted should have the control plane machine set replicas up to date" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.211 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.276 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.287 STEP: Updating the control plane machine set strategy to OnDelete @ 11/05/25 04:44:46.309 STEP: Updating the machine name prefix of the control plane machine set to "master-prefix-on-delete" @ 11/05/25 04:44:46.478 STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:46.938 [PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:47.055 STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:47.056 STEP: Updating the control plane machine set strategy to RollingUpdate @ 11/05/25 04:44:47.133 github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func2.2.4.4.2.ItShouldNotOnDeleteReplaceTheOutdatedMachine.4() /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:188 +0x2f fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference 
fail [runtime/panic.go:262]: Test Panicked
failed: (1.2s) 2025-11-05T04:44:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should not replace the outdated machine"
started: 6/16/55 "ControlPlaneMachineSet Operator With an inactive ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and there is diff in the providerSpec of the newest, alphabetically last machine should perform control plane machine set regeneration"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.309
STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.343
STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.357
STEP: Updating the machine name prefix of the control plane machine set to "master-prefix" @ 11/05/25 04:44:46.407
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:46.616
STEP: Un-setting the machine name prefix of the control plane machine set @ 11/05/25 04:44:46.722
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:47.376
[PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:47.41
github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func1.2.3.2.2.ItShouldRollingUpdateReplaceTheOutdatedMachine.2()
	/go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:130 +0x3e
fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
fail [runtime/panic.go:262]: Test Panicked
failed: (1.1s) 2025-11-05T04:44:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should rolling update replace the outdated machine"
started: 7/17/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the provider spec of index 1 is not as expected should rolling update replace the outdated machine"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.238
STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.284
STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.294
STEP: Removing the defaulted field from the control plane machine set @ 11/05/25 04:44:46.343
STEP: Checking the control plane machine set replicas are consistently up to date @ 11/05/25 04:44:47.088
[FAILED] in [It] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:300 @ 11/05/25 04:44:47.088
STEP: Checking the control plane machine set exists @ 11/05/25 04:44:47.104
STEP: Checking the control plane machine set is active @ 11/05/25 04:44:47.121
STEP: Updating the provider spec of the control plane machine set @ 11/05/25 04:44:47.131
fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:300]: test framework should not be nil
Expected
    : nil
not to be nil
failed: (1.2s) 2025-11-05T04:44:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and a defaulted value is deleted from the ControlPlaneMachineSet should have the control plane machine set not cause a rollout"
started: 8/18/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should rolling update replace the outdated machine"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.249
STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.282
STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.289
STEP: Updating the control plane machine set strategy to OnDelete @ 11/05/25 04:44:46.311
STEP: Updating the machine name prefix of the control plane machine set to "master-prefix-on-delete" @ 11/05/25 04:44:46.507
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:46.833
STEP: Un-setting the machine name prefix of the control plane machine set @ 11/05/25 04:44:46.87
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:47.344
[PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:44:47.382
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:44:47.382
github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func2.2.4.4.2.3.ItShouldNotOnDeleteReplaceTheOutdatedMachine.2()
	/go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:188 +0x2f
fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
fail [runtime/panic.go:262]: Test Panicked
failed: (1.3s) 2025-11-05T04:44:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should not replace the outdated machine"
started: 9/19/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should replace the outdated machine when deleted"
I1105 04:45:16.940737 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.373
STEP: Checking the control plane machine set exists @ 11/05/25 04:45:47.557
STEP: Checking the control plane machine set is active @ 11/05/25 04:45:47.565
STEP: Updating the control plane machine set strategy to OnDelete @ 11/05/25 04:45:47.582
STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:45:47.646
[PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:45:47.692
STEP: Updating the provider spec of the control plane machine at index 2 @ 11/05/25 04:45:47.692
STEP: Updating the control plane machine set strategy to RollingUpdate @ 11/05/25 04:45:47.739
github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func2.2.4.3.ItShouldOnDeleteReplaceTheOutDatedMachineWhenDeleted.4()
	/go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:216 +0x3e
fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
fail [runtime/panic.go:262]: Test Panicked
failed: (1m0s) 2025-11-05T04:45:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and the provider spec of index 2 is not as expected should replace the outdated machine when deleted"
started: 10/20/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected should not replace the outdated machine"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.404
STEP: Checking the control plane machine set exists @ 11/05/25 04:45:47.6
STEP: Checking the control plane machine set is active @ 11/05/25 04:45:47.609
STEP: Updating the machine name prefix of the control plane machine set to "master-prefix" @ 11/05/25 04:45:47.636
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:45:47.964
STEP: Un-setting the machine name prefix of the control plane machine set @ 11/05/25 04:45:48.007
STEP: Updating the provider spec of the control plane machine at index 1 @ 11/05/25 04:45:48.099
[PANICKED] in [It] - /usr/lib/golang/src/runtime/panic.go:262 @ 11/05/25 04:45:48.131
github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e.init.func1.2.3.2.2.ItShouldRollingUpdateReplaceTheOutdatedMachine.2()
	/go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:130 +0x3e
fail [runtime/panic.go:262]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
fail [runtime/panic.go:262]: Test Panicked
failed: (1m1s) 2025-11-05T04:45:48 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should rolling update replace the outdated machine"
started: 11/21/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted should uninstall the control plane machine set without control plane machine changes"
I1105 04:46:17.178620 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 04:47:17.444273 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 04:48:17.696785 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 04:49:17.908938 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
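The panics above all share one shape: a nil pointer dereference inside the shared specs in test/e2e/helpers/cases.go (lines 130, 188 and 216), while the one clean failure at cases.go:300 is an explicit Gomega assertion that the shared test framework handle is non-nil. A minimal sketch of that guard pattern, assuming a hypothetical Framework type and helper signature rather than the repository's actual API:

```go
// Hypothetical sketch of the nil-framework guard implied by the
// "test framework should not be nil" failure above; the Framework type
// and helper name are illustrative, not the repository's actual API.
package helpers

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Framework stands in for the e2e test framework handle the shared
// specs receive (clients, contexts, and so on).
type Framework struct{}

// ItShouldRollingUpdateReplaceTheOutdatedMachine registers a shared spec.
// Asserting on the handle up front turns a nil framework into the
// readable Gomega failure seen at cases.go:300, instead of the runtime
// nil-pointer panics reported from cases.go:130, 188 and 216.
func ItShouldRollingUpdateReplaceTheOutdatedMachine(testFramework *Framework) {
	It("should rolling update replace the outdated machine", func() {
		Expect(testFramework).ToNot(BeNil(), "test framework should not be nil")
		// ... mutate the provider spec and wait for the rolling update ...
	})
}
```

Registered this way, a nil handle fails the spec immediately with the message above rather than tripping the runtime panic handler mid-test.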
time="2025-11-05T04:50:07Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:a3480c389e namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[daemonset:loki-promtail hmsg:4cc790605e namespace:openshift-e2e-loki]}" message="{SuccessfulCreate Created pod: loki-promtail-cxffr map[firstTimestamp:2025-11-05T04:50:06Z lastTimestamp:2025-11-05T04:50:06Z reason:SuccessfulCreate]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:a3480c389e namespace:openshift-oauth-apiserver pod:apiserver-69c86c487b-kqms9]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1c7353647b namespace:openshift-e2e-loki pod:loki-promtail-cxffr]}" message="{Scheduled Successfully assigned openshift-e2e-loki/loki-promtail-cxffr to ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:Scheduled]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:a3480c389e namespace:openshift-oauth-apiserver pod:apiserver-69c86c487b-kqms9]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:a3480c389e namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-77dcb99c96-p26vp]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:07Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:9b807eac9f namespace:openshift-machine-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-rbac-proxy-crio-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-machine-config-operator(6780917f0c38b9112220e1e10cab6634) map[count:4 firstTimestamp:2025-11-05T04:49:42Z lastTimestamp:2025-11-05T04:50:07Z reason:BackOff]}"
time="2025-11-05T04:50:09Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-77dcb99c96-p26vp]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:09Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-oauth-apiserver pod:apiserver-69c86c487b-kqms9]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:09Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:11Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-77dcb99c96-p26vp]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:11Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:11Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-oauth-apiserver pod:apiserver-69c86c487b-kqms9]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:16Z reason:NetworkNotReady]}"
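The FailedScheduling messages account for all seven nodes: one (the replacement master) carries an untolerated taint, first the cloud provider's node.cloudprovider.kubernetes.io/uninitialized:NoSchedule taint and then node.kubernetes.io/not-ready, three fail the pods' node affinity/selector, and three fail pod anti-affinity, so the displaced control-plane replicas stay Pending. Only pods that tolerate the taint can land on the node this early; loki-promtail is scheduled there at 04:50:07, presumably because its DaemonSet manifest carries broad tolerations. A minimal illustration of the relevant toleration using upstream Kubernetes API types (not taken from any manifest in this job):

```go
// Illustrative only: the toleration a pod would need to schedule onto the
// replacement master while it still carries the cloud provider's
// "uninitialized" NoSchedule taint seen in the FailedScheduling events.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tol := corev1.Toleration{
		Key:      "node.cloudprovider.kubernetes.io/uninitialized",
		Operator: corev1.TolerationOpExists, // value ("true") is ignored with Exists
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// Pods without this toleration (the oauth and apiserver replicas above)
	// stay Pending on the new node until the taint is removed.
	fmt.Printf("%+v\n", tol)
}
```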
time="2025-11-05T04:50:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:16Z reason:FailedMount]}"
time="2025-11-05T04:50:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:16Z reason:FailedMount]}"
time="2025-11-05T04:50:17Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:16Z reason:FailedMount]}"
time="2025-11-05T04:50:17Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f993061c1c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-29zwn\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:16Z reason:FailedMount]}"
I1105 04:50:18.169042 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T04:50:20Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:4bfd4df35c machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-9f98a746a10e4a27be194b3256575bcc map[firstTimestamp:2025-11-05T04:50:20Z lastTimestamp:2025-11-05T04:50:20Z reason:AnnotationChange]}"
time="2025-11-05T04:50:20Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:5c2f8024bc machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-9f98a746a10e4a27be194b3256575bcc map[firstTimestamp:2025-11-05T04:50:20Z lastTimestamp:2025-11-05T04:50:20Z reason:AnnotationChange]}"
time="2025-11-05T04:50:20Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:e54ff06f21 machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 now has machineconfiguration.openshift.io/state=Done map[firstTimestamp:2025-11-05T04:50:20Z lastTimestamp:2025-11-05T04:50:20Z reason:AnnotationChange]}"
time="2025-11-05T04:50:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:24Z reason:FailedMount]}"
time="2025-11-05T04:50:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:24Z reason:FailedMount]}"
time="2025-11-05T04:50:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:24Z reason:FailedMount]}"
time="2025-11-05T04:50:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f993061c1c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-29zwn\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:24Z reason:FailedMount]}"
time="2025-11-05T04:50:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T04:50:16Z lastTimestamp:2025-11-05T04:50:24Z reason:NetworkNotReady]}"
time="2025-11-05T04:50:25Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-oauth-apiserver pod:apiserver-69c86c487b-kqms9]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:25Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:25Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-77dcb99c96-p26vp]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:50:27Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[deployment:etcd-operator hmsg:ea65ad6659 namespace:openshift-etcd-operator]}" message="{TargetUpdateRequired \"etcd-peer-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" in \"openshift-etcd\" requires a new target cert/key pair: secret doesn't exist map[firstTimestamp:2025-11-05T04:50:27Z interesting:true lastTimestamp:2025-11-05T04:50:27Z reason:TargetUpdateRequired]}"
time="2025-11-05T04:50:28Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[deployment:etcd-operator hmsg:1f1164a4f1 namespace:openshift-etcd-operator]}" message="{TargetUpdateRequired \"etcd-serving-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" in \"openshift-etcd\" requires a new target cert/key pair: secret doesn't exist map[firstTimestamp:2025-11-05T04:50:28Z interesting:true lastTimestamp:2025-11-05T04:50:28Z reason:TargetUpdateRequired]}"
time="2025-11-05T04:50:30Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[deployment:etcd-operator hmsg:2ec6b1ed4c namespace:openshift-etcd-operator]}" message="{TargetUpdateRequired \"etcd-serving-metrics-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" in \"openshift-etcd\" requires a new target cert/key pair: secret doesn't exist map[firstTimestamp:2025-11-05T04:50:30Z interesting:true lastTimestamp:2025-11-05T04:50:30Z reason:TargetUpdateRequired]}"
time="2025-11-05T04:50:51Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:4967a51884 machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 now has machineconfiguration.openshift.io/reason= map[firstTimestamp:2025-11-05T04:50:51Z lastTimestamp:2025-11-05T04:50:51Z reason:AnnotationChange]}"
time="2025-11-05T04:51:01Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[certificatesigningrequest:csr-msfmk hmsg:ce438cf36f]}" message="{CSRApproved CSR \"csr-msfmk\" has been approved map[firstTimestamp:2025-11-05T04:51:01Z interesting:true lastTimestamp:2025-11-05T04:51:01Z reason:CSRApproved]}"
time="2025-11-05T04:51:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d215506a47 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:gcp-pd-csi-driver-node-t2ktn]}" message="{ProbeError Liveness probe error: Get \"http://10.0.0.7:10300/healthz\": dial tcp 10.0.0.7:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:51:07Z lastTimestamp:2025-11-05T04:51:07Z reason:ProbeError]}"
time="2025-11-05T04:51:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b0c0069c55 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:gcp-pd-csi-driver-node-t2ktn]}" message="{ProbeError Liveness probe error: Get \"http://10.0.0.7:10303/healthz\": dial tcp 10.0.0.7:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:51:07Z lastTimestamp:2025-11-05T04:51:07Z reason:ProbeError]}"
time="2025-11-05T04:51:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0a911ab3ae namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:gcp-pd-csi-driver-node-t2ktn]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.0.7:10300/healthz\": dial tcp 10.0.0.7:10300: connect: connection refused map[firstTimestamp:2025-11-05T04:51:07Z lastTimestamp:2025-11-05T04:51:07Z reason:Unhealthy]}"
time="2025-11-05T04:51:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:7159873927 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:gcp-pd-csi-driver-node-t2ktn]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.0.7:10303/healthz\": dial tcp 10.0.0.7:10303: connect: connection refused map[firstTimestamp:2025-11-05T04:51:07Z lastTimestamp:2025-11-05T04:51:07Z reason:Unhealthy]}"
time="2025-11-05T04:51:07Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[certificatesigningrequest:csr-cz85j hmsg:aebbb836ab]}" message="{CSRApproved CSR \"csr-cz85j\" has been approved map[firstTimestamp:2025-11-05T04:51:07Z interesting:true lastTimestamp:2025-11-05T04:51:07Z reason:CSRApproved]}"
I1105 04:51:18.420187 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T04:51:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f38ee35df5 namespace:openshift-e2e-loki pod:loki-promtail-cxffr]}" message="{AddedInterface Add eth0 [10.130.2.9/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T04:51:27Z lastTimestamp:2025-11-05T04:51:27Z reason:AddedInterface]}"
time="2025-11-05T04:51:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:d1eb5763af namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Pulling Pulling image \"quay.io/openshift-logging/promtail:v2.9.8\" map[container:promtail firstTimestamp:2025-11-05T04:51:27Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T04:51:27Z reason:Pulling]}"
time="2025-11-05T04:51:28Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:1af8f753b0 namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T04:51:28Z lastTimestamp:2025-11-05T04:51:28Z reason:ProbeError]}"
time="2025-11-05T04:51:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:0724c73793 namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T04:51:28Z lastTimestamp:2025-11-05T04:51:28Z reason:Unhealthy]}"
time="2025-11-05T04:51:29Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:1af8f753b0 namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:51:28Z lastTimestamp:2025-11-05T04:51:29Z reason:ProbeError]}"
time="2025-11-05T04:51:29Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:0724c73793 namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) map[count:2 firstTimestamp:2025-11-05T04:51:28Z lastTimestamp:2025-11-05T04:51:29Z reason:Unhealthy]}"
time="2025-11-05T04:51:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:9f3023f06d namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:oauth-openshift-85b9b447d5-4k7xk]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.13:6443/healthz\": context deadline exceeded map[firstTimestamp:2025-11-05T04:51:30Z lastTimestamp:2025-11-05T04:51:30Z reason:Unhealthy]}"
time="2025-11-05T04:51:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ab9bc9857b namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Pulled Successfully pulled image \"quay.io/openshift-logging/promtail:v2.9.8\" in 10.502s (10.502s including waiting). Image size: 478481622 bytes. map[container:promtail firstTimestamp:2025-11-05T04:51:37Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T04:51:37Z reason:Pulled]}"
time="2025-11-05T04:51:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T04:51:37Z lastTimestamp:2025-11-05T04:51:37Z reason:Created]}"
time="2025-11-05T04:51:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T04:51:37Z lastTimestamp:2025-11-05T04:51:37Z reason:Started]}"
time="2025-11-05T04:51:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:6bd083e00c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Pulling Pulling image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" map[container:oauth-proxy firstTimestamp:2025-11-05T04:51:37Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T04:51:37Z reason:Pulling]}"
time="2025-11-05T04:51:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:2a0e3131ef namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Pulled Successfully pulled image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" in 5.955s (5.955s including waiting). Image size: 482442792 bytes. map[container:oauth-proxy firstTimestamp:2025-11-05T04:51:43Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T04:51:43Z reason:Pulled]}"
time="2025-11-05T04:51:44Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T04:51:44Z lastTimestamp:2025-11-05T04:51:44Z reason:Created]}"
time="2025-11-05T04:51:44Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T04:51:44Z lastTimestamp:2025-11-05T04:51:44Z reason:Started]}"
time="2025-11-05T04:51:44Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T04:51:44Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T04:51:44Z reason:Pulling]}"
time="2025-11-05T04:51:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:2417470bb2 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 4.263s (4.263s including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T04:51:48Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T04:51:48Z reason:Pulled]}"
time="2025-11-05T04:51:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T04:51:48Z lastTimestamp:2025-11-05T04:51:48Z reason:Created]}"
time="2025-11-05T04:51:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:loki-promtail-cxffr]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T04:51:48Z lastTimestamp:2025-11-05T04:51:48Z reason:Started]}"
time="2025-11-05T04:51:58Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f"
in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:01Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:02Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:03Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:04Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:06Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:07Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind 
map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:07Z reason:ProbeError]}" time="2025-11-05T04:52:07Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:07Z reason:Unhealthy]}" time="2025-11-05T04:52:08Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:08Z reason:ProbeError]}" time="2025-11-05T04:52:08Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:2 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:08Z reason:Unhealthy]}" time="2025-11-05T04:52:09Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:10Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:12Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:13Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:13Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:13Z reason:ProbeError]}" time="2025-11-05T04:52:13Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:3 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:13Z reason:Unhealthy]}" time="2025-11-05T04:52:14Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:15Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:16Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:17Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" 
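The pattern here is mechanical: the static etcd pod on the replacement master reports PodInitializing once per second while the etcd-guard pod's readiness probe is refused on 10.0.0.7:9980, so the guard flaps until the member process starts listening. The failure text implies a plain HTTPS readiness probe against the member's /readyz endpoint; a minimal sketch using upstream Kubernetes API types, with the port and path mirrored from the log rather than taken from the operator's actual guard manifest:

```go
// Illustrative readiness probe of the kind the etcd-guard failures above
// imply: an HTTPS GET against the member's /readyz endpoint. The port and
// path mirror the log; interval values are assumptions, not the real manifest.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(9980),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    5, // refused connections repeat each period until etcd is up
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", probe)
}
```

While the target is not listening at all, each attempt fails fast with "connect: connection refused", which is exactly the repeating ProbeError/Unhealthy pair recorded above.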
I1105 04:52:18.639234 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T04:52:18Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:18Z reason:ProbeError]}"
time="2025-11-05T04:52:18Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:4 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:18Z reason:Unhealthy]}"
time="2025-11-05T04:52:19Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:19Z reason:ProbeError]}"
time="2025-11-05T04:52:19Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:5 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:19Z reason:Unhealthy]}"
time="2025-11-05T04:52:24Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T04:52:24Z reason:ProbeError]}"
time="2025-11-05T04:52:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T04:52:24Z reason:Unhealthy]}"
time="2025-11-05T04:52:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:24Z reason:ProbeError]}"
time="2025-11-05T04:52:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:6 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:24Z reason:Unhealthy]}"
time="2025-11-05T04:52:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f"
time="2025-11-05T04:52:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T04:52:25Z reason:ProbeError]}"
time="2025-11-05T04:52:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:2 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T04:52:25Z reason:Unhealthy]}"
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:27Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T04:52:27Z reason:ProbeError]}" time="2025-11-05T04:52:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:4 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T04:52:27Z reason:Unhealthy]}" time="2025-11-05T04:52:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:29Z reason:ProbeError]}" time="2025-11-05T04:52:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:7 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:29Z reason:Unhealthy]}" time="2025-11-05T04:52:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T04:52:32Z reason:ProbeError]}" time="2025-11-05T04:52:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T04:52:32Z reason:Unhealthy]}" time="2025-11-05T04:52:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:34Z reason:ProbeError]}" time="2025-11-05T04:52:34Z" level=info msg="event interval matches 
ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:8 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T04:52:34Z reason:Unhealthy]}" time="2025-11-05T04:52:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:36Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T04:52:38Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:eef7a0f66f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(1999142741a516c8879c614b4ee8c47f) map[firstTimestamp:2025-11-05T04:52:38Z lastTimestamp:2025-11-05T04:52:38Z reason:BackOff]}" time="2025-11-05T04:52:39Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:eef7a0f66f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(1999142741a516c8879c614b4ee8c47f) map[count:2 firstTimestamp:2025-11-05T04:52:38Z lastTimestamp:2025-11-05T04:52:39Z reason:BackOff]}" time="2025-11-05T04:52:40Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:eef7a0f66f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(1999142741a516c8879c614b4ee8c47f) map[count:3 firstTimestamp:2025-11-05T04:52:38Z lastTimestamp:2025-11-05T04:52:40Z reason:BackOff]}" time="2025-11-05T04:52:44Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:74091a054f namespace:openshift-etcd 
node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T04:52:44Z lastTimestamp:2025-11-05T04:52:44Z reason:ProbeError]}" time="2025-11-05T04:52:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:1ac69da92c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T04:52:44Z lastTimestamp:2025-11-05T04:52:44Z reason:Unhealthy]}" time="2025-11-05T04:52:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{ map[hmsg:4fbf5bece8 namespace:openshift-kube-scheduler]}" message="{ControlPlaneTopology unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused map[firstTimestamp:2025-11-05T04:52:33Z lastTimestamp:2025-11-05T04:52:33Z reason:ControlPlaneTopology]}" time="2025-11-05T04:52:49Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:6794c43155 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded\nbody: \n map[firstTimestamp:2025-11-05T04:52:49Z lastTimestamp:2025-11-05T04:52:49Z reason:ProbeError]}" time="2025-11-05T04:52:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:86048ad0e7 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded map[firstTimestamp:2025-11-05T04:52:49Z lastTimestamp:2025-11-05T04:52:49Z reason:Unhealthy]}" time="2025-11-05T04:52:51Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{ map[hmsg:4fbf5bece8 namespace:openshift-kube-controller-manager]}" message="{ControlPlaneTopology unable to get control plane topology, using HA cluster values for leader election: Get \"https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster\": dial tcp [::1]:6443: connect: connection refused map[firstTimestamp:2025-11-05T04:52:37Z lastTimestamp:2025-11-05T04:52:37Z reason:ControlPlaneTopology]}" time="2025-11-05T04:52:54Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T04:52:54Z lastTimestamp:2025-11-05T04:52:54Z reason:ProbeError]}" time="2025-11-05T04:52:54Z" level=info msg="event 
interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:5a2023a0f5 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T04:52:54Z lastTimestamp:2025-11-05T04:52:54Z reason:Unhealthy]}" time="2025-11-05T04:52:59Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:52:54Z lastTimestamp:2025-11-05T04:52:59Z reason:ProbeError]}" time="2025-11-05T04:52:59Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:5a2023a0f5 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[count:2 firstTimestamp:2025-11-05T04:52:54Z lastTimestamp:2025-11-05T04:52:59Z reason:Unhealthy]}" time="2025-11-05T04:53:04Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:3 firstTimestamp:2025-11-05T04:52:54Z lastTimestamp:2025-11-05T04:53:04Z reason:ProbeError]}" time="2025-11-05T04:53:10Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:224d8053ec namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:controller-manager-6848447799-2sgkx]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T04:53:10Z lastTimestamp:2025-11-05T04:53:10Z reason:ProbeError]}" time="2025-11-05T04:53:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:b20ab9597e namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:controller-manager-6848447799-2sgkx]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T04:53:10Z lastTimestamp:2025-11-05T04:53:10Z reason:Unhealthy]}" I1105 04:53:18.872622 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:53:32Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:1be95bc7bf namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 
pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.6:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:12 firstTimestamp:2025-11-05T04:05:04Z lastTimestamp:2025-11-05T04:53:32Z reason:ProbeError]}" time="2025-11-05T04:53:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:91b214014c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.6:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[count:12 firstTimestamp:2025-11-05T04:05:04Z lastTimestamp:2025-11-05T04:53:32Z reason:Unhealthy]}" time="2025-11-05T04:53:37Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:c28577f968 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-vnccd]}" message="{FailedScheduling 0/7 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 4 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 Preemption is not helpful for scheduling, 4 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:53:37Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:1be95bc7bf namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.6:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:13 firstTimestamp:2025-11-05T04:05:04Z lastTimestamp:2025-11-05T04:53:37Z reason:ProbeError]}" time="2025-11-05T04:53:37Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:91b214014c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.6:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[count:13 firstTimestamp:2025-11-05T04:05:04Z lastTimestamp:2025-11-05T04:53:37Z reason:Unhealthy]}" time="2025-11-05T04:53:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:53:40Z reason:Unhealthy]}" time="2025-11-05T04:53:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:53:45Z reason:Unhealthy]}" time="2025-11-05T04:53:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 
namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:53:50Z reason:Unhealthy]}" time="2025-11-05T04:53:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:53:55Z reason:Unhealthy]}" time="2025-11-05T04:54:00Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-vnccd]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:54:00Z reason:Unhealthy]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-b74br]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:98616391a0 namespace:openshift-e2e-loki replicaset:event-exporter-6cdb995667]}" message="{SuccessfulCreate Created pod: event-exporter-6cdb995667-tfsht map[firstTimestamp:2025-11-05T04:54:01Z lastTimestamp:2025-11-05T04:54:01Z reason:SuccessfulCreate]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:87512b80ce namespace:openshift-e2e-loki pod:event-exporter-6cdb995667-tfsht]}" message="{Scheduled Successfully assigned openshift-e2e-loki/event-exporter-6cdb995667-tfsht to ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:Scheduled]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-p7xgz]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:033f2a4b2c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:event-exporter-6cdb995667-gsvfz]}" message="{Killing Stopping container event-exporter map[container:event-exporter firstTimestamp:2025-11-05T04:54:01Z lastTimestamp:2025-11-05T04:54:01Z reason:Killing]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-psldw]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-77dcb99c96-p6k88]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:310abc2410 namespace:openshift-e2e-loki pod:event-exporter-6cdb995667-tfsht]}" message="{AddedInterface Add eth0 [10.130.2.26/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T04:54:02Z lastTimestamp:2025-11-05T04:54:02Z reason:AddedInterface]}" time="2025-11-05T04:54:02Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-b74br]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:d7cc444545 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:event-exporter-6cdb995667-tfsht]}" message="{Pulling Pulling image \"ghcr.io/opsgenie/kubernetes-event-exporter:v0.11\" map[container:event-exporter firstTimestamp:2025-11-05T04:54:02Z image:ghcr.io/opsgenie/kubernetes-event-exporter:v0.11 lastTimestamp:2025-11-05T04:54:02Z reason:Pulling]}" time="2025-11-05T04:54:04Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-p7xgz]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:54:05Z reason:Unhealthy]}" time="2025-11-05T04:54:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:06Z reason:Unhealthy]}" time="2025-11-05T04:54:08Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-65f46c49b8-z45xg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. 
preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:54:10Z reason:Unhealthy]}" time="2025-11-05T04:54:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:11Z reason:Unhealthy]}" time="2025-11-05T04:54:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:54:15Z reason:Unhealthy]}" time="2025-11-05T04:54:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:16Z reason:Unhealthy]}" I1105 04:54:19.109588 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:54:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:54:20Z reason:Unhealthy]}" time="2025-11-05T04:54:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:21Z reason:Unhealthy]}" time="2025-11-05T04:54:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T04:53:40Z lastTimestamp:2025-11-05T04:54:25Z reason:Unhealthy]}" time="2025-11-05T04:54:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind 
map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:26Z reason:Unhealthy]}" time="2025-11-05T04:54:27Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-psldw]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:54:28Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:0f3e7cacf6 namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:oauth-openshift-85b9b447d5-fzrnf]}" message="{ProbeError Readiness probe error: Get \"https://10.130.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T04:54:28Z lastTimestamp:2025-11-05T04:54:28Z reason:ProbeError]}" time="2025-11-05T04:54:28Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:2348e447c7 namespace:openshift-authentication node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:oauth-openshift-85b9b447d5-fzrnf]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.0.63:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T04:54:28Z lastTimestamp:2025-11-05T04:54:28Z reason:Unhealthy]}" time="2025-11-05T04:54:30Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:86b06cc01b namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.14:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.14:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:54:30Z lastTimestamp:2025-11-05T04:54:30Z reason:ProbeError]}" time="2025-11-05T04:54:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:67f950458b namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.14:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.14:8443: connect: connection refused map[firstTimestamp:2025-11-05T04:54:30Z lastTimestamp:2025-11-05T04:54:30Z reason:Unhealthy]}" time="2025-11-05T04:54:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:31Z reason:Unhealthy]}" time="2025-11-05T04:54:35Z" 
level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a133cfa522 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:event-exporter-6cdb995667-tfsht]}" message="{Pulled Successfully pulled image \"ghcr.io/opsgenie/kubernetes-event-exporter:v0.11\" in 32.592s (32.592s including waiting). Image size: 99725566 bytes. map[container:event-exporter firstTimestamp:2025-11-05T04:54:35Z image:ghcr.io/opsgenie/kubernetes-event-exporter:v0.11 lastTimestamp:2025-11-05T04:54:35Z reason:Pulled]}" time="2025-11-05T04:54:35Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:86b06cc01b namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.14:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.14:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:54:30Z lastTimestamp:2025-11-05T04:54:35Z reason:ProbeError]}" time="2025-11-05T04:54:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:67f950458b namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-69c86c487b-kqms9]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.14:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.14:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T04:54:30Z lastTimestamp:2025-11-05T04:54:35Z reason:Unhealthy]}" time="2025-11-05T04:54:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:36Z reason:Unhealthy]}" time="2025-11-05T04:54:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:41Z reason:Unhealthy]}" time="2025-11-05T04:54:46Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:46Z reason:Unhealthy]}" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.502 [FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:54:47.511 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.008s. cluster operators should all be available, not progressing and not degraded Value for field 'Items' failed to satisfy matcher. 
Expected <[]v1.ClusterOperator | len:34, cap:65>: : {
    Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver openshift-controller-manager storage] are either not available, are progressing or are degraded.",
    ClusterOperators: [
        { Name: "authentication", Conditions: [
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/4 pods have been updated to the latest generation and 3/4 pods are available", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", },
        ], },
        { Name: "control-plane-machine-set", Conditions: [
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", },
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", },
        ...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
to contain element matching <*matchers.HaveFieldMatcher | 0xc0004f0de0>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc00043a6f0>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc00043a510>{ Element: <*matchers.AndMatcher | 0xc00043a4e0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0004f0cc0>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000390ed0>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc0004f0ce0>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000390ee0>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc0004f0d00>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000161700>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc00043a480>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1317033, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc00043a5d0>{ Element: <*matchers.AndMatcher | 0xc00043a5a0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0004f0d20>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000390f00>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc0004f0d40>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000390f10>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc0004f0d60>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000161740>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc00043a540>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1344864, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc0004f0d20>{ Field: "Type", Expected: <*matc...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
failed: (10m0s) 2025-11-05T04:54:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should replace the outdated machine when deleted"
started: 12/22/55 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected and again MachineNamePrefix is reset should not replace the outdated machine"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.765
[FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:54:47.774
fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.007s.
cluster operators should all be available, not progressing and not degraded
Value for field 'Items' failed to satisfy matcher.
Expected <[]v1.ClusterOperator | len:34, cap:65>: : {
    Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver openshift-controller-manager storage] are either not available, are progressing or are degraded.",
    ClusterOperators: [
        { Name: "authentication", Conditions: [
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/4 pods have been updated to the latest generation and 3/4 pods are available", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", },
        ], },
        { Name: "control-plane-machine-set", Conditions: [
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", },
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", },
        ...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
to contain element matching <*matchers.HaveFieldMatcher | 0xc00061a240>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc0006b2420>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc0006b2240>{ Element: <*matchers.AndMatcher | 0xc0006b2210>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc00061a100>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc0003904a0>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc00061a140>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc0003904b0>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc00061a160>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc0003b43c0>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0006b21b0>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1317273, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc0006b2300>{ Element: <*matchers.AndMatcher | 0xc0006b22d0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc00061a180>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc0003904e0>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc00061a1a0>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc0003904f0>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc00061a1c0>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc0003b4400>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0006b2270>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345092, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc00061a180>{ Field: "Type", Expected: <*matc...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
failed: (10m0s) 2025-11-05T04:54:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted should have the control plane machine set replicas up to date"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.763
[FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:54:47.773
fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.008s.
cluster operators should all be available, not progressing and not degraded
Value for field 'Items' failed to satisfy matcher.
Expected <[]v1.ClusterOperator | len:34, cap:65>: : {
    Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver openshift-controller-manager storage] are either not available, are progressing or are degraded.",
    ClusterOperators: [
        { Name: "authentication", Conditions: [
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/4 pods have been updated to the latest generation and 3/4 pods are available", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", },
        ], },
        { Name: "control-plane-machine-set", Conditions: [
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", },
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", },
        ...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
to contain element matching <*matchers.HaveFieldMatcher | 0xc0002cd360>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc0005586f0>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc000558510>{ Element: <*matchers.AndMatcher | 0xc0005584e0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0002cd040>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000618f10>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc0002cd080>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000618f20>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc0002cd0c0>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000759b40>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc000558480>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1317168, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc0005585d0>{ Element: <*matchers.AndMatcher | 0xc0005585a0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0002cd0e0>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000618f40>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc0002cd100>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000618f50>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc0002cd180>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000759b80>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc000558540>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345007, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc0002cd0e0>{ Field: "Type", Expected: <*matc...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
failed: (10m0s) 2025-11-05T04:54:47 "ControlPlaneMachineSet Operator With an inactive ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and there is diff in the providerSpec of the newest, alphabetically last machine should perform control plane machine set regeneration"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.854
[FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:54:47.863
fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.007s.
cluster operators should all be available, not progressing and not degraded
Value for field 'Items' failed to satisfy matcher.
Expected <[]v1.ClusterOperator | len:34, cap:65>: : {
    Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver openshift-controller-manager storage] are either not available, are progressing or are degraded.",
    ClusterOperators: [
        { Name: "authentication", Conditions: [
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/4 pods have been updated to the latest generation and 3/4 pods are available", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", },
            { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", },
        ], },
        { Name: "control-plane-machine-set", Conditions: [
            { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", },
            { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", },
            { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", },
            { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", },
        ...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain element matching <*matchers.HaveFieldMatcher | 0xc000331d60>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc0003b46f0>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc0003b4510>{ Element: <*matchers.AndMatcher | 0xc0003b44e0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc000331920>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000644120>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc000331940>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000644130>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc000331960>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc00038c700>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0003b4480>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1317382, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc0003b45d0>{ Element: <*matchers.AndMatcher | 0xc0003b45a0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc000331c60>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000644150>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc000331ca0>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000644160>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc000331cc0>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc00038c740>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0003b4540>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345220, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc000331c60>{ Field: "Type", Expected: <*matc... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output failed: (10m0s) 2025-11-05T04:54:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the provider spec of index 1 is not as expected should rolling update replace the outdated machine" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.926 [FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:54:47.928 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.000s. cluster operators should all be available, not progressing and not degraded Value for field 'Items' failed to satisfy matcher. 
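The matcher tree rendered above (an AndMatcher of ContainElementMatchers, each wrapping HaveField, Equal, and a WithTransform over LastTransitionTime.Time compared with BeNumerically ">") corresponds to a Gomega expression of roughly the following shape. This is a reconstruction for illustration only; the stand-in types and helper names are assumptions, not the code at clusteroperators.go:44:

    package helpers_test

    import (
        "testing"
        "time"

        . "github.com/onsi/gomega"
        "github.com/onsi/gomega/types"
    )

    // Local stand-ins mirroring the shape of configv1.ClusterOperator as it
    // appears in the dumps above.
    type transitionTime struct{ Time time.Time }
    type condition struct {
        Type               string
        Status             string
        LastTransitionTime transitionTime
    }
    type operatorStatus struct{ Conditions []condition }
    type clusterOperator struct{ Status operatorStatus }

    // conditionHeldFor matches a condition of the given type and status whose
    // last transition happened more than minHold ago; WithTransform converts
    // the timestamp into elapsed seconds so BeNumerically can compare it,
    // which is how a "minimum availability time" can be expressed.
    func conditionHeldFor(condType, status string, minHold time.Duration) types.GomegaMatcher {
        secondsSince := func(t time.Time) float64 { return time.Since(t).Seconds() }
        return SatisfyAll(
            HaveField("Type", Equal(condType)),
            HaveField("Status", Equal(status)),
            HaveField("LastTransitionTime.Time",
                WithTransform(secondsSince, BeNumerically(">", minHold.Seconds()))),
        )
    }

    // stableOperator is the per-operator matcher the dumps describe:
    // Available=True held for the minimum time, and Progressing=False. The
    // dumps are truncated, so a Degraded=False clause following the same
    // pattern is omitted here.
    func stableOperator(minAvailability time.Duration) types.GomegaMatcher {
        return HaveField("Status.Conditions", SatisfyAll(
            ContainElement(conditionHeldFor("Available", "True", minAvailability)),
            ContainElement(conditionHeldFor("Progressing", "False", 0)),
        ))
    }

    func TestStableOperatorMatcher(t *testing.T) {
        g := NewWithT(t)
        co := clusterOperator{Status: operatorStatus{Conditions: []condition{
            {Type: "Available", Status: "True",
                LastTransitionTime: transitionTime{Time: time.Now().Add(-2 * time.Minute)}},
            {Type: "Progressing", Status: "False",
                LastTransitionTime: transitionTime{Time: time.Now().Add(-2 * time.Minute)}},
        }}}
        g.Expect(co).To(stableOperator(time.Minute))
    }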
Expected <[]v1.ClusterOperator | len:34, cap:65>: : { Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver openshift-controller-manager storage] are either not available, are progressing or are degraded.", ClusterOperators: [ { Name: "authentication", Conditions: [ { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/4 pods have been updated to the latest generation and 3/4 pods are available", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", }, ], }, { Name: "control-plane-machine-set", Conditions: [ { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", }, { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", }, ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain element matching <*matchers.HaveFieldMatcher | 0xc000330be0>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc0007010e0>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc000700f00>{ Element: <*matchers.AndMatcher | 0xc000700ed0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc000330ac0>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc0002b3590>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc000330ae0>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc0002b35a0>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc000330b00>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000316080>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc000700e40>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1317465, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc000700fc0>{ Element: <*matchers.AndMatcher | 0xc000700f90>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc000330b20>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc0002b35d0>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc000330b40>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc0002b35e0>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc000330b60>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000317280>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc000700f30>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345296, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc000330b20>{ Field: "Type", Expected: <*matc... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output failed: (10m0s) 2025-11-05T04:54:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected and again MachineNamePrefix is reset should replace the outdated machine when deleted" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.921 [FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:54:47.932 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.007s. cluster operators should all be available, not progressing and not degraded Value for field 'Items' failed to satisfy matcher. 
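Each STEP line above describes the same poll loop: list the ClusterOperators every 10s for up to 10m and require every one of them to be stable. Building on the stand-ins sketched earlier, one plausible shape for that wait uses Gomega's Eventually with an explicit timeout and polling interval; the listOperators callback and the HaveEach aggregation are assumptions (the failure text shows the real helper wraps the list in its own summary type):

    // waitForStableOperators polls until every operator reports Available=True
    // (held for at least a minute) and Progressing=False. Eventually retries
    // the callback on each poll; a non-nil error fails that attempt.
    func waitForStableOperators(g Gomega, listOperators func() ([]clusterOperator, error)) {
        g.Eventually(listOperators).
            WithTimeout(10 * time.Minute).
            WithPolling(10 * time.Second).
            Should(HaveEach(stableOperator(time.Minute)),
                "cluster operators should all be available, not progressing and not degraded")
    }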
Expected <[]v1.ClusterOperator | len:34, cap:65>: : { Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver openshift-controller-manager storage] are either not available, are progressing or are degraded.", ClusterOperators: [ { Name: "authentication", Conditions: [ { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/4 pods have been updated to the latest generation and 3/4 pods are available", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", }, ], }, { Name: "control-plane-machine-set", Conditions: [ { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", }, { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", }, ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain element matching <*matchers.HaveFieldMatcher | 0xc0002a9f60>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc0005bc6c0>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc0005bc4e0>{ Element: <*matchers.AndMatcher | 0xc0005bc4b0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0002a9e00>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00007d020>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc0002a9e20>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00007d030>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc0002a9e80>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000461d00>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0005bc450>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1317490, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc0005bc5a0>{ Element: <*matchers.AndMatcher | 0xc0005bc570>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0002a9ea0>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00007d050>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc0002a9ec0>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00007d060>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc0002a9ee0>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000461d40>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0005bc510>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345331, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc0002a9ea0>{ Field: "Type", Expected: <*matc... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output failed: (10m0s) 2025-11-05T04:54:47 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 1 is not as expected should rolling update replace the outdated machine" time="2025-11-05T04:54:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T04:54:06Z lastTimestamp:2025-11-05T04:54:51Z reason:Unhealthy]}" time="2025-11-05T04:54:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:2c2f0869c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:event-exporter-6cdb995667-tfsht]}" message="{Created Created container: event-exporter map[firstTimestamp:2025-11-05T04:54:55Z lastTimestamp:2025-11-05T04:54:55Z reason:Created]}" time="2025-11-05T04:54:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:90a489cdea namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:event-exporter-6cdb995667-tfsht]}" message="{Started Started container event-exporter map[firstTimestamp:2025-11-05T04:54:56Z lastTimestamp:2025-11-05T04:54:56Z reason:Started]}" time="2025-11-05T04:54:56Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:158d85535e namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{ProbeError Readiness probe error: Get \"https://10.130.0.74:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.74:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:54:56Z lastTimestamp:2025-11-05T04:54:56Z reason:ProbeError]}" time="2025-11-05T04:54:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:70031db61f namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.0.74:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.74:8443: connect: connection refused map[firstTimestamp:2025-11-05T04:54:56Z lastTimestamp:2025-11-05T04:54:56Z reason:Unhealthy]}" time="2025-11-05T04:54:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:debe66d1a1 namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{ProbeError Readiness probe error: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:54:58Z lastTimestamp:2025-11-05T04:54:58Z reason:ProbeError]}" time="2025-11-05T04:55:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:0193112edb namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{Unhealthy Readiness probe failed: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused map[firstTimestamp:2025-11-05T04:54:58Z 
lastTimestamp:2025-11-05T04:54:58Z reason:Unhealthy]}" time="2025-11-05T04:55:01Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:debe66d1a1 namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{ProbeError Readiness probe error: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:54:58Z lastTimestamp:2025-11-05T04:55:00Z reason:ProbeError]}" time="2025-11-05T04:55:01Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:158d85535e namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{ProbeError Readiness probe error: Get \"https://10.130.0.74:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.74:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:54:56Z lastTimestamp:2025-11-05T04:55:01Z reason:ProbeError]}" time="2025-11-05T04:55:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:70031db61f namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.0.74:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.74:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T04:54:56Z lastTimestamp:2025-11-05T04:55:01Z reason:Unhealthy]}" time="2025-11-05T04:55:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0193112edb namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{Unhealthy Readiness probe failed: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused map[count:2 firstTimestamp:2025-11-05T04:54:58Z lastTimestamp:2025-11-05T04:55:00Z reason:Unhealthy]}" time="2025-11-05T04:55:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:debe66d1a1 namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{ProbeError Readiness probe error: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T04:54:58Z lastTimestamp:2025-11-05T04:55:01Z reason:ProbeError]}" time="2025-11-05T04:55:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:0193112edb namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{Unhealthy Readiness probe failed: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused map[count:3 firstTimestamp:2025-11-05T04:54:58Z lastTimestamp:2025-11-05T04:55:01Z reason:Unhealthy]}" time="2025-11-05T04:55:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e9608f4a1b namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{ProbeError Liveness probe error: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused\nbody: \n 
map[firstTimestamp:2025-11-05T04:55:01Z lastTimestamp:2025-11-05T04:55:01Z reason:ProbeError]}" time="2025-11-05T04:55:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2389633011 namespace:openshift-console node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:downloads-75d55c5477-vjsrv]}" message="{Unhealthy Liveness probe failed: Get \"http://10.130.2.32:8080/\": dial tcp 10.130.2.32:8080: connect: connection refused map[firstTimestamp:2025-11-05T04:55:01Z lastTimestamp:2025-11-05T04:55:01Z reason:Unhealthy]}" time="2025-11-05T04:55:02Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-g8cr7]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[firstTimestamp:2025-11-05T04:55:02Z lastTimestamp:2025-11-05T04:55:02Z reason:Unhealthy]}" time="2025-11-05T04:55:03Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-kzb8j]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[firstTimestamp:2025-11-05T04:55:03Z lastTimestamp:2025-11-05T04:55:03Z reason:Unhealthy]}" time="2025-11-05T04:55:03Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:55:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:05Z reason:Unhealthy]}" time="2025-11-05T04:55:06Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:158d85535e namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-77dcb99c96-4rp8q]}" message="{ProbeError Readiness probe error: Get \"https://10.130.0.74:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.74:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T04:54:56Z lastTimestamp:2025-11-05T04:55:06Z reason:ProbeError]}" time="2025-11-05T04:55:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:10Z reason:Unhealthy]}" time="2025-11-05T04:55:12Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-g8cr7]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[count:2 firstTimestamp:2025-11-05T04:55:02Z lastTimestamp:2025-11-05T04:55:12Z reason:Unhealthy]}" time="2025-11-05T04:55:13Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-kzb8j]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[count:2 firstTimestamp:2025-11-05T04:55:03Z lastTimestamp:2025-11-05T04:55:13Z reason:Unhealthy]}" time="2025-11-05T04:55:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:15Z reason:Unhealthy]}" I1105 04:55:19.514978 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:55:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:20Z reason:Unhealthy]}" time="2025-11-05T04:55:22Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef 
namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-g8cr7]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[count:3 firstTimestamp:2025-11-05T04:55:02Z lastTimestamp:2025-11-05T04:55:22Z reason:Unhealthy]}" time="2025-11-05T04:55:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:25Z reason:Unhealthy]}" time="2025-11-05T04:55:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:30Z reason:Unhealthy]}" time="2025-11-05T04:55:32Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-g8cr7]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[count:4 firstTimestamp:2025-11-05T04:55:02Z lastTimestamp:2025-11-05T04:55:32Z reason:Unhealthy]}" time="2025-11-05T04:55:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:35Z reason:Unhealthy]}" time="2025-11-05T04:55:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:40Z reason:Unhealthy]}" time="2025-11-05T04:55:42Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:158299b7f8 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0_openshift-etcd(9e8e55ac2df71eca97770bd65a66c397) map[firstTimestamp:2025-11-05T04:55:42Z lastTimestamp:2025-11-05T04:55:42Z reason:BackOff]}" time="2025-11-05T04:55:45Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:158299b7f8 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0_openshift-etcd(9e8e55ac2df71eca97770bd65a66c397) map[count:2 firstTimestamp:2025-11-05T04:55:42Z lastTimestamp:2025-11-05T04:55:45Z reason:BackOff]}" time="2025-11-05T04:55:45Z" 
level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:45Z reason:Unhealthy]}" time="2025-11-05T04:55:47Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:158299b7f8 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0_openshift-etcd(9e8e55ac2df71eca97770bd65a66c397) map[count:3 firstTimestamp:2025-11-05T04:55:42Z lastTimestamp:2025-11-05T04:55:47Z reason:BackOff]}" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:45:48.229 [FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:55:48.239 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.008s. cluster operators should all be available, not progressing and not degraded Value for field 'Items' failed to satisfy matcher. Expected <[]v1.ClusterOperator | len:34, cap:65>: : { Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver storage] are either not available, are progressing or are degraded.", ClusterOperators: [ { Name: "authentication", Conditions: [ { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/4 pods have been updated to the latest generation and 3/4 pods are available", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", }, ], }, { Name: "control-plane-machine-set", Conditions: [ { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", }, { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", }, ], }, ... 
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain element matching <*matchers.HaveFieldMatcher | 0xc0003da880>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc0002f6b40>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc0002f6930>{ Element: <*matchers.AndMatcher | 0xc0002f6840>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0003da640>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00041cc50>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc0003da680>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00041cc60>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc0003da6c0>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000a88240>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0002f67e0>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1377790, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc0002f69f0>{ Element: <*matchers.AndMatcher | 0xc0002f69c0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc0003da700>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00041cc80>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc0003da740>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00041cc90>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc0003da780>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000a88280>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc0002f6960>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345489, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc0003da700>{ Field: "Type", Expected: <*matc... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output failed: (10m0s) 2025-11-05T04:55:48 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected should not replace the outdated machine" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:45:48.461 [FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 04:55:48.472 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 600.008s. cluster operators should all be available, not progressing and not degraded Value for field 'Items' failed to satisfy matcher. 
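The other remedy the truncation notice suggests is a custom GomegaStringer. The value under assertion already carries a one-line summary (the Message field at the head of each dump), so a wrapper like the following, continuing the sketch above with fmt added to its imports, would make failures print that summary instead of every condition of all 34 operators. The type is hypothetical; only its Message/ClusterOperators shape is taken from the dumps:

    // clusterOperatorList mirrors the { Message, ClusterOperators } wrapper
    // visible at the head of each dump.
    type clusterOperatorList struct {
        Message          string
        ClusterOperators []clusterOperator
    }

    // GomegaString implements gomega/format.GomegaStringer. When a value under
    // assertion provides this method, Gomega renders the returned string in
    // failure messages instead of reflecting over the whole struct.
    func (l clusterOperatorList) GomegaString() string {
        return fmt.Sprintf("%s (%d operators inspected)", l.Message, len(l.ClusterOperators))
    }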
Expected <[]v1.ClusterOperator | len:34, cap:65>: : { Message: "Cluster operators [authentication control-plane-machine-set csi-snapshot-controller etcd kube-apiserver network olm openshift-apiserver storage] are either not available, are progressing or are degraded.", ClusterOperators: [ { Name: "authentication", Conditions: [ { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "OAuthServerDeploymentDegraded: 1 of 4 requested instances are unavailable for oauth-openshift.openshift-authentication ()", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:53:36Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/4 pods have been updated to the latest generation and 3/4 pods are available", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", }, ], }, { Name: "control-plane-machine-set", Conditions: [ { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AsExpected", Message: "cluster operator is upgradable", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:04:56Z, }, Reason: "AllReplicasAvailable", Message: "", }, { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "AsExpected", Message: "", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:07Z, }, Reason: "NeedsUpdateReplicas", Message: "Observed 1 replica(s) in need of update", }, ], }, ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain element matching <*matchers.HaveFieldMatcher | 0xc00047d560>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc00053d3b0>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc00053cfc0>{ Element: <*matchers.AndMatcher | 0xc00053cf90>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc00047d3c0>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000510a50>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc00047d3e0>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000510a60>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc00047d400>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc0003b5c40>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc00053cf30>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1377950, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc00053d1a0>{ Element: <*matchers.AndMatcher | 0xc00053d140>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc00047d420>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc000510a80>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc00047d440>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc000510a90>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc00047d460>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc0003b5c80>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc00053cff0>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 1345607, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc00047d420>{ Field: "Type", Expected: <*matc... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output failed: (10m0s) 2025-11-05T04:55:48 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted should uninstall the control plane machine set without control plane machine changes" time="2025-11-05T04:55:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T04:55:05Z lastTimestamp:2025-11-05T04:55:50Z reason:Unhealthy]}" time="2025-11-05T04:55:53Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-65f46c49b8-z45xg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. 
preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:55:55Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:95ad9219fd namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{ProbeError Readiness probe error: Get \"https://10.130.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.70:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:55:55Z lastTimestamp:2025-11-05T04:55:55Z reason:ProbeError]}" time="2025-11-05T04:55:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:9e229721e0 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:apiserver-69c86c487b-f7pzg]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.0.70:8443: connect: connection refused map[firstTimestamp:2025-11-05T04:55:55Z lastTimestamp:2025-11-05T04:55:55Z reason:Unhealthy]}" time="2025-11-05T04:55:56Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" I1105 04:56:19.862510 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:56:39Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod:kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:56:39Z lastTimestamp:2025-11-05T04:56:39Z reason:Unhealthy]}" time="2025-11-05T04:56:39Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:39Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:40Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:41Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:42Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:43Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0
uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:44Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:45Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:46Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:47Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:48Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:49Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:50Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd
mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:51Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:52Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:53Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:54Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:55Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:56Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:57Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:58Z" level=error msg="pod
logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:56:59Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:00Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:01Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:02Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:03Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:04Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:05Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:06Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:07Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:08Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:09Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:10Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:11Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:12Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:13Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:14Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:15Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:16Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:17Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:18Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:19Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" I1105 04:57:20.132673 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:57:20Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:21Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:22Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:23Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:24Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:25Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:26Z" 
level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:27Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:28Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:29Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:30Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:31Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:32Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:33Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:34Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:35Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:35Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:90ba594177 namespace:openshift-marketplace pod:community-operators-lwl6x]}" message="{FailedScheduling running Bind plugin \"DefaultBinder\": Post \"https://api-int.ci-op-x0f88pwp-f3da4.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lwl6x/binding\": http2: client connection lost map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T04:57:36Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:37Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:38Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:39Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:40Z" level=info msg="event interval matches PodSandbox" locator="{Kind map[hmsg:e12b6c45c7 namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:community-operators-lwl6x]}" message="{FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-lwl6x_openshift-marketplace_08051820-f630-49b4-bc8c-e58999d552c2_0(a2f2de4b71a3e5ec59d17b8c2d363c980e9b744190daaa9b03d1c254f7b25870): error adding pod openshift-marketplace_community-operators-lwl6x to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\"a2f2de4b71a3e5ec59d17b8c2d363c980e9b744190daaa9b03d1c254f7b25870\" Netns:\"/var/run/netns/86d14b1f-c4b1-4436-8585-b5d1b4e262dc\" IfName:\"eth0\" Args:\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-lwl6x;K8S_POD_INFRA_CONTAINER_ID=a2f2de4b71a3e5ec59d17b8c2d363c980e9b744190daaa9b03d1c254f7b25870;K8S_POD_UID=08051820-f630-49b4-bc8c-e58999d552c2\" Path:\"\" ERRORED: error configuring pod [openshift-marketplace/community-operators-lwl6x] networking: Multus: [openshift-marketplace/community-operators-lwl6x/08051820-f630-49b4-bc8c-e58999d552c2]: error waiting for pod: Get \"https://api-int.ci-op-x0f88pwp-f3da4.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-lwl6x?timeout=1m0s\": context deadline exceeded\n': StdinData: {\"auxiliaryCNIChainName\":\"vendor-cni-chain\",\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"} map[firstTimestamp:2025-11-05T04:57:40Z lastTimestamp:2025-11-05T04:57:40Z reason:FailedCreatePodSandBox]}" time="2025-11-05T04:57:40Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:57:41Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true×tamps=true\": dial tcp 10.0.0.6:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" 
[connection-refused message repeated at 2025-11-05T04:57:42Z]
time="2025-11-05T04:57:43Z" level=info msg="event interval matches PodSandbox" locator="{Kind map[hmsg:3423a1e17e namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:community-operators-lwl6x]}" [same FailedCreatePodSandBox/Multus failure as above, retried with sandbox 3f499b5b82fbca1adad7d3e6347962fd7093ed7451ffe8cd899e8591b7002ad7 and netns 61aad246-9ee2-4906-aaa1-fab3f58ecfe0; same "error waiting for pod: context deadline exceeded" cause]
[connection-refused message repeated at 2025-11-05T04:57:43Z and 2025-11-05T04:57:44Z]
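Both FailedCreatePodSandBox events bottom out in the same cause: multus-shim gave up waiting for the pod object because api-int was unreachable, so the sandbox retry loop points at the API outage rather than a CNI defect. A minimal sketch for pulling the sandbox history of the affected pod out-of-band (standard oc calls, not commands from this run):
  # Describe shows the sandbox events in the pod's event list:
  oc describe pod -n openshift-marketplace community-operators-lwl6x
  # Or filter the namespace events directly by reason and object:
  oc get events -n openshift-marketplace \
    --field-selector involvedObject.name=community-operators-lwl6x,reason=FailedCreatePodSandBox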
[connection-refused message repeated at 2025-11-05T04:57:45Z and 2025-11-05T04:57:46Z]
time="2025-11-05T04:57:46Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-b74br]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
[the same FailedScheduling message was logged at 2025-11-05T04:57:46Z for openshift-controller-manager/controller-manager-6848447799-p7xgz, openshift-apiserver/apiserver-65f46c49b8-z45xg, openshift-authentication/oauth-openshift-85b9b447d5-psldw (matched as FailedSchedulingDuringNodeUpdate), and openshift-oauth-apiserver/apiserver-5b4bf4cf7c-kqhd6]
time="2025-11-05T04:57:46Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-t6n7p]}" message="{NodeNotReady Node is not ready map[firstTimestamp:2025-11-05T04:57:46Z lastTimestamp:2025-11-05T04:57:46Z reason:NodeNotReady]}"
[connection-refused message repeated once per second from 2025-11-05T04:57:47Z through 2025-11-05T04:57:51Z]
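This batch of FailedScheduling events is consistent with one master being drained: each of these control-plane deployments spreads its replicas across masters with pod anti-affinity, so with one master unschedulable no node satisfies both the node selector and the anti-affinity rule until the replacement is Ready. A sketch for listing the stuck pods and their scheduling events with standard field selectors:
  # Pods waiting on a scheduling decision:
  oc get pods -A --field-selector status.phase=Pending
  # The corresponding scheduler events, cluster-wide:
  oc get events -A --field-selector reason=FailedScheduling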
[connection-refused message repeated at 2025-11-05T04:57:52Z and 2025-11-05T04:57:53Z]
I1105 04:58:20.368138 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T04:58:24Z" level=error msg="pod logged an error: Get \"https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.6:10250: i/o timeout" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397"
time="2025-11-05T04:58:24Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397"
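Here the PodsStreamer's log request shifts from connection refused to i/o timeout and then to "pods not found": the machine behind 10.0.0.6 is gone and the mirror pod has been deleted with it. The request it was retrying can be reproduced by hand; a sketch (the client certificate paths are placeholders for credentials authorized against the kubelet API):
  # Direct kubelet containerLogs query, as retried by the streamer:
  curl -sk --cert client.crt --key client.key \
    'https://10.0.0.6:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0/etcd?follow=true&timestamps=true'
  # Equivalent stream through the API server:
  oc logs -f --timestamps -n openshift-etcd etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0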
time="2025-11-05T04:58:24Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:25e49def44 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:24Z reason:FailedToUpdateEndpointSlices]}"
[matching FailedToUpdateEndpointSlices events were logged at 2025-11-05T04:58:24Z for openshift-kube-apiserver/apiserver (hmsg:80d5092721), openshift-kube-controller-manager/kube-controller-manager (hmsg:eddc00c159), and openshift-kube-scheduler/scheduler (hmsg:de3d648a32), each skipping that service's static pod on the missing node; all four recurred at 2025-11-05T04:58:25Z with count:2]
[the "not found" message repeated once per second from 2025-11-05T04:58:25Z]
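The FailedToUpdateEndpointSlices errors mean the EndpointSlice controller still sees static pods that reference the deleted Node object. Since slices carry the kubernetes.io/service-name label, the stale endpoints can be inspected directly; a sketch:
  # Confirm the node object is gone:
  oc get node ci-op-x0f88pwp-f3da4-d9fgd-master-0
  # List the slices for the affected service by its standard label:
  oc get endpointslices -n openshift-etcd -l kubernetes.io/service-name=etcd -o wide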
time="2025-11-05T04:58:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:58:27Z lastTimestamp:2025-11-05T04:58:27Z reason:Unhealthy]}"
time="2025-11-05T04:58:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:58:29Z lastTimestamp:2025-11-05T04:58:29Z reason:Unhealthy]}"
[the "not found" message continued once per second; the four FailedToUpdateEndpointSlices events recurred with count:3 at 2025-11-05T04:58:27Z and count:4 at 2025-11-05T04:58:31Z; both Unhealthy events recurred with count:2 at 2025-11-05T04:58:32Z and 2025-11-05T04:58:34Z]
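The HTTP 500 readiness failures on the openshift-apiserver and openshift-oauth-apiserver replicas most likely reflect their health checks degrading while an etcd member is absent, and should clear once the replacement master joins. Probe configuration and failure history for one replica can be checked with (pod name taken from the events above):
  # Probe definition and recent Unhealthy events for the pod:
  oc describe pod -n openshift-oauth-apiserver apiserver-69c86c487b-bkvkf
  # Operator-level view of the affected API server components:
  oc get clusteroperators authentication openshift-apiserver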
node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397"
time="2025-11-05T04:58:37Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397"
[the PodsStreamer "not found" record above repeats at 1-2s intervals, identical except for its timestamp, through 2025-11-05T04:59:16Z]
time="2025-11-05T04:58:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T04:58:27Z lastTimestamp:2025-11-05T04:58:37Z reason:Unhealthy]}"
[the openshift-oauth-apiserver readiness-probe record above recurs every ~5s, its count reaching 10 by lastTimestamp:2025-11-05T04:59:12Z]
time="2025-11-05T04:58:39Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:25e49def44 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[count:5 firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:39Z reason:FailedToUpdateEndpointSlices]}"
time="2025-11-05T04:58:39Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:80d5092721 namespace:openshift-kube-apiserver service:apiserver]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-apiserver/apiserver: skipping Pod kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[count:5 firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:39Z reason:FailedToUpdateEndpointSlices]}"
time="2025-11-05T04:58:39Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:eddc00c159 namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[count:5 firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:39Z reason:FailedToUpdateEndpointSlices]}"
time="2025-11-05T04:58:39Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:de3d648a32 namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[count:5 firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:39Z reason:FailedToUpdateEndpointSlices]}"
[each of the four FailedToUpdateEndpointSlices records above recurs at 2025-11-05T04:58:55Z with count:6]
time="2025-11-05T04:58:39Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T04:58:29Z lastTimestamp:2025-11-05T04:58:39Z reason:Unhealthy]}"
[the openshift-apiserver readiness-probe record above recurs every ~5s, its count reaching 10 by lastTimestamp:2025-11-05T04:59:14Z]
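The run above is dominated by a single PodsStreamer record repeating with only its time="..." field changing. A minimal filter along the following lines (a throwaway sketch, not part of the openshift/origin harness; the file and variable names are invented) collapses such runs when scanning raw monitor output:

// dedupe.go - collapses consecutive monitor records that differ only in
// their time="..." field, like the PodsStreamer errors above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var tsRe = regexp.MustCompile(`time="[^"]*"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // records can be very long
	var prevKey, prevLine string
	run := 0
	flush := func() {
		if run == 0 {
			return
		}
		fmt.Println(prevLine)
		if run > 1 {
			fmt.Printf("[last record repeated %d times]\n", run-1)
		}
	}
	for sc.Scan() {
		line := sc.Text()
		key := tsRe.ReplaceAllString(line, `time="?"`) // record identity minus timestamp
		if key == prevKey {
			run++
			prevLine = line // keep the occurrence with the latest timestamp
			continue
		}
		flush()
		prevKey, prevLine, run = key, line, 1
	}
	flush()
}

Fed a raw one-record-per-line stream (for example, go run dedupe.go < build-log.txt), it prints each distinct record once with a repeat count, which is roughly the convention used for the bracketed elision notes in this log.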
msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:eddc00c159 namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[count:6 firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:55Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T04:58:55Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:de3d648a32 namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-0 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-0 Not Found map[count:6 firstTimestamp:2025-11-05T04:58:24Z lastTimestamp:2025-11-05T04:58:55Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T04:58:56Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:58:57Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:58:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T04:58:27Z lastTimestamp:2025-11-05T04:58:57Z reason:Unhealthy]}" time="2025-11-05T04:58:58Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:58:59Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:58:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T04:58:29Z lastTimestamp:2025-11-05T04:58:59Z 
reason:Unhealthy]}" time="2025-11-05T04:59:00Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:01Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:02Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T04:58:27Z lastTimestamp:2025-11-05T04:59:02Z reason:Unhealthy]}" time="2025-11-05T04:59:03Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:04Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T04:58:29Z lastTimestamp:2025-11-05T04:59:04Z reason:Unhealthy]}" time="2025-11-05T04:59:05Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:06Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:07Z" level=error msg="pod logged an error: pods 
\"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T04:58:27Z lastTimestamp:2025-11-05T04:59:07Z reason:Unhealthy]}" time="2025-11-05T04:59:08Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:09Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T04:58:29Z lastTimestamp:2025-11-05T04:59:09Z reason:Unhealthy]}" time="2025-11-05T04:59:10Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:11Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:12Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T04:58:27Z lastTimestamp:2025-11-05T04:59:12Z reason:Unhealthy]}" time="2025-11-05T04:59:13Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not 
found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:14Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T04:58:29Z lastTimestamp:2025-11-05T04:59:14Z reason:Unhealthy]}" time="2025-11-05T04:59:15Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:16Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-0\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-0 uid/dab79c1b-e224-4c39-88dd-df3ef102cf34 container/etcd mirror-uid/9e8e55ac2df71eca97770bd65a66c397" time="2025-11-05T04:59:19Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:52fecf7a7a namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.15:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.15:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T04:59:19Z lastTimestamp:2025-11-05T04:59:19Z reason:ProbeError]}" time="2025-11-05T04:59:19Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2211f0ccc2 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-77dcb99c96-p26vp]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.15:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.15:8443: connect: connection refused map[firstTimestamp:2025-11-05T04:59:19Z lastTimestamp:2025-11-05T04:59:19Z reason:Unhealthy]}" I1105 04:59:20.684800 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T04:59:22Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping 
time="2025-11-05T04:59:22Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T04:59:22Z lastTimestamp:2025-11-05T04:59:22Z reason:ProbeError]}"
[the KubeAPIReadinessProbeError record above recurs with an identical readyz body at 2025-11-05T04:59:27Z (count:2) and 2025-11-05T04:59:32Z (count:3)]
time="2025-11-05T04:59:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:16 firstTimestamp:2025-11-05T04:31:14Z lastTimestamp:2025-11-05T04:59:22Z reason:Unhealthy]}"
[the kube-apiserver-guard record above recurs with count:17 at 2025-11-05T04:59:27Z]
time="2025-11-05T04:59:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-69c86c487b-k9skq]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T04:59:23Z lastTimestamp:2025-11-05T04:59:23Z reason:Unhealthy]}"
[the apiserver-69c86c487b-k9skq record above recurs every ~5s, its count reaching 11 by lastTimestamp:2025-11-05T05:00:13Z]
time="2025-11-05T04:59:23Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T04:59:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:8bdd4442bd namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-69c86c487b-bkvkf]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.71:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded map[firstTimestamp:2025-11-05T04:59:27Z lastTimestamp:2025-11-05T04:59:27Z reason:Unhealthy]}"
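The guard-pod ProbeError records above embed the kube-apiserver's verbose readyz output, where each check is reported as [+]name ok or [-]name followed by a failure reason. A sketch of fetching and filtering that output directly follows; the address, port, and the skipped TLS verification are placeholders for illustration, not values taken from this job:

// readyzcheck.go - fetch a verbose readyz report and print only failing checks.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; use real certs
	}}
	resp, err := client.Get("https://10.0.0.3:6443/readyz?verbose") // placeholder endpoint
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Each line looks like "[+]ping ok" or "[-]shutdown failed: reason withheld".
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "[-]") {
			fmt.Println("failing check:", strings.TrimPrefix(line, "[-]"))
		}
	}
	fmt.Println("status:", resp.Status)
}

Against the body captured above, the only failing check would be shutdown, which is why the probe returns 500 while every other poststarthook reports ok.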
ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b38c18a8f9 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.3:9980/readyz\": dial tcp 10.0.0.3:9980: connect: connection refused\nbody: \n map[count:32 firstTimestamp:2025-11-05T04:15:38Z lastTimestamp:2025-11-05T04:59:53Z reason:ProbeError]}" time="2025-11-05T04:59:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0eea7d995d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.3:9980/readyz\": dial tcp 10.0.0.3:9980: connect: connection refused map[count:32 firstTimestamp:2025-11-05T04:15:38Z lastTimestamp:2025-11-05T04:59:53Z reason:Unhealthy]}" time="2025-11-05T04:59:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-69c86c487b-k9skq]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T04:59:23Z lastTimestamp:2025-11-05T04:59:58Z reason:Unhealthy]}" time="2025-11-05T04:59:58Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:b38c18a8f9 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.3:9980/readyz\": dial tcp 10.0.0.3:9980: connect: connection refused\nbody: \n map[count:33 firstTimestamp:2025-11-05T04:15:38Z lastTimestamp:2025-11-05T04:59:58Z reason:ProbeError]}" time="2025-11-05T04:59:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0eea7d995d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.3:9980/readyz\": dial tcp 10.0.0.3:9980: connect: connection refused map[count:33 firstTimestamp:2025-11-05T04:15:38Z lastTimestamp:2025-11-05T04:59:58Z reason:Unhealthy]}" time="2025-11-05T05:00:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-69c86c487b-k9skq]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T04:59:23Z lastTimestamp:2025-11-05T05:00:03Z reason:Unhealthy]}" time="2025-11-05T05:00:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:b38c18a8f9 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.3:9980/readyz\": dial tcp 10.0.0.3:9980: connect: connection refused\nbody: \n map[count:34 firstTimestamp:2025-11-05T04:15:38Z lastTimestamp:2025-11-05T05:00:03Z reason:ProbeError]}" time="2025-11-05T05:00:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:0eea7d995d namespace:openshift-etcd 
node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.3:9980/readyz\": dial tcp 10.0.0.3:9980: connect: connection refused map[count:34 firstTimestamp:2025-11-05T04:15:38Z lastTimestamp:2025-11-05T05:00:03Z reason:Unhealthy]}" time="2025-11-05T05:00:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-69c86c487b-k9skq]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T04:59:23Z lastTimestamp:2025-11-05T05:00:08Z reason:Unhealthy]}" time="2025-11-05T05:00:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-69c86c487b-k9skq]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:11 firstTimestamp:2025-11-05T04:59:23Z lastTimestamp:2025-11-05T05:00:13Z reason:Unhealthy]}" I1105 05:00:20.970802 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:00:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:26Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-65f46c49b8-4frl5]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:00:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:00:30Z reason:Unhealthy]}" time="2025-11-05T05:00:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd 
mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/6c5eaf08-b4a6-4b45-93c2-98a290cab56f container/etcd mirror-uid/f2242914c2a824c79abb3069d04976e0" time="2025-11-05T05:00:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:00:35Z reason:Unhealthy]}" time="2025-11-05T05:00:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:00:36Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:00:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:00:38Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is waiting to start: PodInitializing" component=PodsStreamer 
locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:00:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:00:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:00:40Z reason:Unhealthy]}" time="2025-11-05T05:00:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:00:45Z reason:Unhealthy]}" time="2025-11-05T05:00:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:00:50Z reason:Unhealthy]}" time="2025-11-05T05:00:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:00:55Z reason:Unhealthy]}" time="2025-11-05T05:01:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:01:00Z reason:Unhealthy]}" time="2025-11-05T05:01:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:01:05Z reason:Unhealthy]}" time="2025-11-05T05:01:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 
map[count:9 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:01:10Z reason:Unhealthy]}" time="2025-11-05T05:01:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:00:30Z lastTimestamp:2025-11-05T05:01:15Z reason:Unhealthy]}" time="2025-11-05T05:01:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d76917fa5c namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.70:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:01:20Z lastTimestamp:2025-11-05T05:01:20Z reason:ProbeError]}" time="2025-11-05T05:01:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4efb402155 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.70:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:01:20Z lastTimestamp:2025-11-05T05:01:20Z reason:Unhealthy]}" I1105 05:01:21.226907 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:01:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d76917fa5c namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.70:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:01:20Z lastTimestamp:2025-11-05T05:01:25Z reason:ProbeError]}" time="2025-11-05T05:01:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4efb402155 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.70:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:01:20Z lastTimestamp:2025-11-05T05:01:25Z reason:Unhealthy]}" time="2025-11-05T05:01:30Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d76917fa5c namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-77dcb99c96-4qlzz]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.70:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.70:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:01:20Z lastTimestamp:2025-11-05T05:01:30Z reason:ProbeError]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:a3480c389e namespace:openshift-apiserver 
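The FailedScheduling records above and in the run that follows all pack their diagnosis into a single "0/N nodes are available: ..." string. A small sketch (an invented helper, not from the repo) that splits such a message into per-predicate counts, which makes runs of these events easier to compare:

// schedreasons.go - break a FailedScheduling message into per-predicate counts.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches "<count> node(s) ..." or "<count> Preemption ..." clauses up to the
// next comma, e.g. "3 node(s) didn't match pod anti-affinity rules".
var reasonRe = regexp.MustCompile(`(\d+) (node\(s\)[^,]*|Preemption[^,]*)`)

func main() {
	// Example message copied from the records below.
	msg := "0/7 nodes are available: 1 node(s) had untolerated taint " +
		"{node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node " +
		"affinity/selector, 3 node(s) didn't match pod anti-affinity rules."
	for _, m := range reasonRe.FindAllStringSubmatch(msg, -1) {
		reason := strings.TrimRight(strings.TrimSpace(m[2]), ".")
		fmt.Printf("count=%s reason=%q\n", m[1], reason)
	}
}

For the sample message this prints one line per predicate (1 untolerated taint, 3 affinity mismatches, 3 anti-affinity mismatches), the same breakdown the scheduler reports in each event below.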
pod:apiserver-65f46c49b8-4frl5]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:a3480c389e namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-nr8qt]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:a3480c389e namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-ctj6d]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-nr8qt]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-ctj6d]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-65f46c49b8-4frl5]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-b74br]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-controller-manager pod:controller-manager-6848447799-p7xgz]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:56Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-65f46c49b8-xqnn6]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[daemonset:loki-promtail hmsg:4fac8fab4b namespace:openshift-e2e-loki]}" message="{SuccessfulCreate Created pod: loki-promtail-xxlpr map[firstTimestamp:2025-11-05T05:01:56Z lastTimestamp:2025-11-05T05:01:56Z reason:SuccessfulCreate]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:d7a3e81ba8 namespace:openshift-e2e-loki pod:loki-promtail-xxlpr]}" message="{Scheduled Successfully assigned openshift-e2e-loki/loki-promtail-xxlpr to ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:Scheduled]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-65f46c49b8-xqnn6]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-ctj6d]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-65f46c49b8-4frl5]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-controller-manager pod:controller-manager-6848447799-p7xgz]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-nr8qt]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:57Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-b74br]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:NetworkNotReady]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:3b9414159c namespace:openshift-machine-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-rbac-proxy-crio-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-machine-config-operator(f0d7a48012a2283b4a6b947333a2e106) map[count:4 firstTimestamp:2025-11-05T05:01:32Z lastTimestamp:2025-11-05T05:01:59Z reason:BackOff]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a5cff9f100 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-sc9gm\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event 
interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:01:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a5cff9f100 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-sc9gm\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:01:59Z reason:FailedMount]}" time="2025-11-05T05:02:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:00Z reason:FailedMount]}" time="2025-11-05T05:02:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:00Z reason:FailedMount]}" time="2025-11-05T05:02:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:00Z reason:FailedMount]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a5cff9f100 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-sc9gm\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:00Z reason:FailedMount]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-ctj6d]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-65f46c49b8-xqnn6]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-nr8qt]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-controller-manager pod:controller-manager-6848447799-p7xgz]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-apiserver pod:apiserver-65f46c49b8-4frl5]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:0207c627d1 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-b74br]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:01Z reason:NetworkNotReady]}" time="2025-11-05T05:02:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:02Z reason:FailedMount]}" time="2025-11-05T05:02:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:02Z reason:FailedMount]}" time="2025-11-05T05:02:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:02Z reason:FailedMount]}" time="2025-11-05T05:02:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a5cff9f100 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-sc9gm\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:02Z reason:FailedMount]}" time="2025-11-05T05:02:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:3 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:03Z reason:NetworkNotReady]}" time="2025-11-05T05:02:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:05Z reason:NetworkNotReady]}" time="2025-11-05T05:02:05Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[deployment:etcd-operator hmsg:2de7883628 namespace:openshift-etcd-operator]}" message="{TargetUpdateRequired \"etcd-peer-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" in \"openshift-etcd\" requires a new target cert/key pair: secret doesn't exist map[firstTimestamp:2025-11-05T05:02:05Z interesting:true lastTimestamp:2025-11-05T05:02:05Z reason:TargetUpdateRequired]}" time="2025-11-05T05:02:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:06Z reason:FailedMount]}" time="2025-11-05T05:02:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:06Z reason:FailedMount]}" time="2025-11-05T05:02:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:06Z reason:FailedMount]}" time="2025-11-05T05:02:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a5cff9f100 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-sc9gm\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:07Z reason:FailedMount]}" time="2025-11-05T05:02:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started? map[count:5 firstTimestamp:2025-11-05T05:01:59Z lastTimestamp:2025-11-05T05:02:07Z reason:NetworkNotReady]}" time="2025-11-05T05:02:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:19 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:02:12Z reason:ProbeError]}" time="2025-11-05T05:02:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:19 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:02:12Z reason:Unhealthy]}" time="2025-11-05T05:02:17Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:20 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:02:17Z reason:ProbeError]}" time="2025-11-05T05:02:17Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:20 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:02:17Z reason:Unhealthy]}" I1105 05:02:21.554571 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:02:22Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:21 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:02:22Z reason:ProbeError]}" time="2025-11-05T05:02:24Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable 
ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:31 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:02:24Z reason:ProbeError]}" time="2025-11-05T05:02:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:63 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:02:24Z reason:Unhealthy]}" time="2025-11-05T05:02:29Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers 
ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:32 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:02:29Z reason:ProbeError]}" time="2025-11-05T05:02:29Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[certificatesigningrequest:csr-hl8w5 hmsg:b0bffdffdf]}" message="{CSRApproved CSR \"csr-hl8w5\" has been approved map[firstTimestamp:2025-11-05T05:02:29Z interesting:true lastTimestamp:2025-11-05T05:02:29Z reason:CSRApproved]}" time="2025-11-05T05:02:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:64 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:02:29Z reason:Unhealthy]}" time="2025-11-05T05:02:33Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:2028063899 machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 now has machineconfiguration.openshift.io/currentConfig=rendered-master-9f98a746a10e4a27be194b3256575bcc map[firstTimestamp:2025-11-05T05:02:33Z lastTimestamp:2025-11-05T05:02:33Z reason:AnnotationChange]}" time="2025-11-05T05:02:33Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:11f6f89cca machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-9f98a746a10e4a27be194b3256575bcc map[firstTimestamp:2025-11-05T05:02:33Z lastTimestamp:2025-11-05T05:02:33Z reason:AnnotationChange]}" time="2025-11-05T05:02:33Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:03cc4b0ff5 machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 now has machineconfiguration.openshift.io/state=Done 
map[firstTimestamp:2025-11-05T05:02:33Z lastTimestamp:2025-11-05T05:02:33Z reason:AnnotationChange]}" time="2025-11-05T05:02:34Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:33 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:02:34Z reason:ProbeError]}" time="2025-11-05T05:02:35Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[certificatesigningrequest:csr-8h67v hmsg:e428bbcf3c]}" message="{CSRApproved CSR \"csr-8h67v\" has been approved map[firstTimestamp:2025-11-05T05:02:35Z interesting:true lastTimestamp:2025-11-05T05:02:35Z reason:CSRApproved]}" time="2025-11-05T05:02:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:39Z" level=error msg="pod logged an error: container \"etcd\" in pod 
\"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:16a53d669c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:gcp-pd-csi-driver-node-lwv26]}" message="{ProbeError Liveness probe error: Get \"http://10.0.0.8:10300/healthz\": dial tcp 10.0.0.8:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:02:39Z lastTimestamp:2025-11-05T05:02:39Z reason:ProbeError]}" time="2025-11-05T05:02:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d413bafd64 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:gcp-pd-csi-driver-node-lwv26]}" message="{ProbeError Liveness probe error: Get \"http://10.0.0.8:10303/healthz\": dial tcp 10.0.0.8:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:02:39Z lastTimestamp:2025-11-05T05:02:39Z reason:ProbeError]}" time="2025-11-05T05:02:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:97f4669386 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:gcp-pd-csi-driver-node-lwv26]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.0.8:10303/healthz\": dial tcp 10.0.0.8:10303: connect: connection refused map[firstTimestamp:2025-11-05T05:02:39Z lastTimestamp:2025-11-05T05:02:39Z reason:Unhealthy]}" time="2025-11-05T05:02:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d96fd79506 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:gcp-pd-csi-driver-node-lwv26]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.0.8:10300/healthz\": dial tcp 10.0.0.8:10300: connect: connection refused map[firstTimestamp:2025-11-05T05:02:39Z lastTimestamp:2025-11-05T05:02:39Z reason:Unhealthy]}" time="2025-11-05T05:02:40Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:41Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" 
time="2025-11-05T05:02:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:44Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:45Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:46Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/53bfe9f3-c7c9-4212-baca-1732e5ac74c2 
container/etcd mirror-uid/b94435cecf8447c225823c6cf50b44a8" time="2025-11-05T05:02:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/9d91aee8-d3a6-4dc8-b9ee-2f0b9a240901 container/etcd mirror-uid/80f360cab9756b30f9be446cbdb0a1b0" time="2025-11-05T05:02:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/9d91aee8-d3a6-4dc8-b9ee-2f0b9a240901 container/etcd mirror-uid/80f360cab9756b30f9be446cbdb0a1b0" time="2025-11-05T05:02:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/9d91aee8-d3a6-4dc8-b9ee-2f0b9a240901 container/etcd mirror-uid/80f360cab9756b30f9be446cbdb0a1b0" time="2025-11-05T05:02:54Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:c28577f968 namespace:openshift-apiserver pod:apiserver-65f46c49b8-pzwpp]}" message="{FailedScheduling 0/7 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 4 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 Preemption is not helpful for scheduling, 4 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:02:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:02:54Z reason:Unhealthy]}" time="2025-11-05T05:02:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/9d91aee8-d3a6-4dc8-b9ee-2f0b9a240901 container/etcd mirror-uid/80f360cab9756b30f9be446cbdb0a1b0" time="2025-11-05T05:02:55Z" level=info msg="event interval matches AnnotationChangeTooOften" locator="{Kind map[hmsg:ac455cdff5 machineconfigpool:master namespace:openshift-machine-config-operator]}" message="{AnnotationChange Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 now has machineconfiguration.openshift.io/reason= map[firstTimestamp:2025-11-05T05:02:55Z lastTimestamp:2025-11-05T05:02:55Z reason:AnnotationChange]}" time="2025-11-05T05:02:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/9d91aee8-d3a6-4dc8-b9ee-2f0b9a240901 container/etcd mirror-uid/80f360cab9756b30f9be446cbdb0a1b0" time="2025-11-05T05:02:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:02:59Z reason:Unhealthy]}" time="2025-11-05T05:03:02Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:83f021c4c2 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:10 firstTimestamp:2025-11-05T04:17:58Z lastTimestamp:2025-11-05T05:03:02Z reason:ProbeError]}" time="2025-11-05T05:03:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b4128814e2 namespace:openshift-e2e-loki pod:loki-promtail-xxlpr]}" message="{AddedInterface Add eth0 [10.131.2.8/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T05:03:03Z lastTimestamp:2025-11-05T05:03:03Z reason:AddedInterface]}" time="2025-11-05T05:03:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:d1eb5763af namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Pulling Pulling image \"quay.io/openshift-logging/promtail:v2.9.8\" map[container:promtail firstTimestamp:2025-11-05T05:03:03Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T05:03:03Z reason:Pulling]}" 
time="2025-11-05T05:03:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:04Z reason:Unhealthy]}" time="2025-11-05T05:03:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:09Z reason:Unhealthy]}" time="2025-11-05T05:03:12Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:8d0b664b68 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Pulled Successfully pulled image \"quay.io/openshift-logging/promtail:v2.9.8\" in 9.242s (9.242s including waiting). Image size: 478481622 bytes. map[container:promtail firstTimestamp:2025-11-05T05:03:12Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T05:03:12Z reason:Pulled]}" time="2025-11-05T05:03:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T05:03:13Z lastTimestamp:2025-11-05T05:03:13Z reason:Created]}" time="2025-11-05T05:03:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T05:03:13Z lastTimestamp:2025-11-05T05:03:13Z reason:Started]}" time="2025-11-05T05:03:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:6bd083e00c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Pulling Pulling image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" map[container:oauth-proxy firstTimestamp:2025-11-05T05:03:13Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T05:03:13Z reason:Pulling]}" time="2025-11-05T05:03:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:14Z reason:Unhealthy]}" time="2025-11-05T05:03:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:9e797132b3 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Pulled Successfully pulled image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" in 4.891s (4.891s including waiting). Image size: 482442792 bytes. 
map[container:oauth-proxy firstTimestamp:2025-11-05T05:03:18Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T05:03:18Z reason:Pulled]}" time="2025-11-05T05:03:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T05:03:18Z lastTimestamp:2025-11-05T05:03:18Z reason:Created]}" time="2025-11-05T05:03:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T05:03:18Z lastTimestamp:2025-11-05T05:03:18Z reason:Started]}" time="2025-11-05T05:03:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T05:03:18Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T05:03:18Z reason:Pulling]}" time="2025-11-05T05:03:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:80e9e1175e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 991ms (991ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T05:03:19Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T05:03:19Z reason:Pulled]}" time="2025-11-05T05:03:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T05:03:19Z lastTimestamp:2025-11-05T05:03:19Z reason:Created]}" time="2025-11-05T05:03:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:loki-promtail-xxlpr]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T05:03:19Z lastTimestamp:2025-11-05T05:03:19Z reason:Started]}" time="2025-11-05T05:03:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:19Z reason:Unhealthy]}" I1105 05:03:21.842011 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:03:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:02:54Z 
lastTimestamp:2025-11-05T05:03:24Z reason:Unhealthy]}" time="2025-11-05T05:03:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:29Z reason:Unhealthy]}" time="2025-11-05T05:03:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:34Z reason:Unhealthy]}" time="2025-11-05T05:03:39Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:02:54Z lastTimestamp:2025-11-05T05:03:39Z reason:Unhealthy]}" time="2025-11-05T05:03:44Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4047eb12db namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{ProbeError Readiness probe error: Get \"https://10.128.0.79:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.79:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:03:44Z lastTimestamp:2025-11-05T05:03:44Z reason:ProbeError]}" time="2025-11-05T05:03:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:3a0b45a93b namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.0.79:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.79:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:03:44Z lastTimestamp:2025-11-05T05:03:44Z reason:Unhealthy]}" time="2025-11-05T05:03:49Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4047eb12db namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{ProbeError Readiness probe error: Get \"https://10.128.0.79:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.79:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:03:44Z lastTimestamp:2025-11-05T05:03:49Z reason:ProbeError]}" time="2025-11-05T05:03:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:3a0b45a93b namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.0.79:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.79:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:03:44Z lastTimestamp:2025-11-05T05:03:49Z reason:Unhealthy]}" time="2025-11-05T05:03:54Z" level=info msg="event interval matches 
ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4047eb12db namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-77dcb99c96-qz8dc]}" message="{ProbeError Readiness probe error: Get \"https://10.128.0.79:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.128.0.79:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:03:44Z lastTimestamp:2025-11-05T05:03:54Z reason:ProbeError]}" time="2025-11-05T05:04:00Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-65f46c49b8-pzwpp]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:04:01Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:04:01Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:04:02Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-cts8l]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:04:02Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-bvmk5]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:04:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:174b8e3d6d namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:olm-operator-6596dc66ff-r6ctv]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.49:8443/healthz\": dial tcp 10.130.2.49:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:04:02Z lastTimestamp:2025-11-05T05:04:02Z reason:ProbeError]}" time="2025-11-05T05:04:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:200d75ce03 namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:olm-operator-6596dc66ff-r6ctv]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.49:8443/healthz\": dial tcp 10.130.2.49:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:04:02Z lastTimestamp:2025-11-05T05:04:02Z reason:Unhealthy]}" time="2025-11-05T05:04:03Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:c3780193dc namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-config-operator-69bc6697c9-l44zx]}" message="{ProbeError Readiness probe error: Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T04:04:22Z lastTimestamp:2025-11-05T05:04:02Z reason:ProbeError]}" time="2025-11-05T05:04:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:7ad48a109b namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-config-operator-69bc6697c9-l44zx]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused map[count:5 firstTimestamp:2025-11-05T04:04:22Z lastTimestamp:2025-11-05T05:04:02Z reason:Unhealthy]}" time="2025-11-05T05:04:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:95555fe25e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.3:9980/readyz\": context deadline exceeded\nbody: \n map[count:12 firstTimestamp:2025-11-05T04:14:08Z lastTimestamp:2025-11-05T05:04:02Z reason:ProbeError]}" time="2025-11-05T05:04:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:594af5559f namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:package-server-manager-6cfb5fcd44-kd57k]}" message="{ProbeError Readiness probe error: Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:04:03Z lastTimestamp:2025-11-05T05:04:03Z reason:ProbeError]}" time="2025-11-05T05:04:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:3219d86e42 namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:package-server-manager-6cfb5fcd44-kd57k]}" message="{Unhealthy Readiness probe failed: Get 
\"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused map[firstTimestamp:2025-11-05T05:04:03Z lastTimestamp:2025-11-05T05:04:03Z reason:Unhealthy]}" time="2025-11-05T05:04:04Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ed0fac845a namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:console-operator-589679b99d-b4p6t]}" message="{ProbeError Readiness probe error: Get \"https://10.128.0.28:8443/readyz\": dial tcp 10.128.0.28:8443: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T04:09:53Z lastTimestamp:2025-11-05T05:04:04Z reason:ProbeError]}" time="2025-11-05T05:04:04Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:9516051592 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:console-operator-589679b99d-b4p6t]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.0.28:8443/readyz\": dial tcp 10.128.0.28:8443: connect: connection refused map[count:4 firstTimestamp:2025-11-05T04:09:53Z lastTimestamp:2025-11-05T05:04:04Z reason:Unhealthy]}" time="2025-11-05T05:04:05Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:04:06Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T05:04:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:07Z reason:Unhealthy]}"
time="2025-11-05T05:04:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:12Z reason:Unhealthy]}"
I1105 05:04:17.662903 1669 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event" err="Internal error occurred: etcdserver: no leader"
I1105 05:04:22.104217 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
E1105 05:05:15.791739 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)"
I1105 05:05:18.767308 1669 trace.go:236] Trace[1843052142]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290 (05-Nov-2025 05:04:18.759) (total time: 60007ms):
Trace[1843052142]: ---"Objects listed" error:the server was unable to return a response in the time allotted, but may still be processing the request (get events) 60007ms (05:05:18.767)
Trace[1843052142]: [1m0.007808522s] [1m0.007808522s] END
E1105 05:05:18.767381 1669 reflector.go:205] "Failed to watch" err="failed to list *v1.Event: the server was unable to return a response in the time allotted, but may still be processing the request (get events)" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:54:47.86
[FAILED] in [BeforeEach] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 05:05:19.493
fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 631.631s.
cluster operators should all be available, not progressing and not degraded
The function passed to Eventually returned the following error:
    <*errors.StatusError | 0xc000726640>: the server was unable to return a response in the time allotted, but may still be processing the request (get clusteroperators.config.openshift.io)
    {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
            Status: "Failure",
            Message: "the server was unable to return a response in the time allotted, but may still be processing the request (get clusteroperators.config.openshift.io)",
            Reason: "Timeout",
            Details: {
                Name: "", Group: "config.openshift.io", Kind: "clusteroperators", UID: "",
                Causes: [
                    {Type: "UnexpectedServerResponse", Message: "{\"metadata\":{},\"status\":\"Failure\",\"message\":\"Timeout: request did not complete within the allotted timeout\",\"reason\":\"Timeout\",\"details\":{},\"code\":504}", Field: ""},
                ],
                RetryAfterSeconds: 0,
            },
            Code: 504,
        },
    }
At one point, however, the function did return successfully.
Yet, Eventually failed because the matcher was not satisfied:
Value for field 'Items' failed to satisfy matcher.
Expected
    <[]v1.ClusterOperator | len:34, cap:65>: :
    {
        Message: "Cluster operators [control-plane-machine-set etcd kube-apiserver kube-controller-manager kube-scheduler network openshift-apiserver storage] are either not available, are progressing or are degraded.",
        ClusterOperators: [
            {
                Name: "control-plane-machine-set",
                Conditions: [
                    {Type: "Upgradeable", Status: "True", LastTransitionTime: {Time: 2025-11-05T04:04:56Z}, Reason: "AsExpected", Message: "cluster operator is upgradable"},
                    {Type: "Available", Status: "True", LastTransitionTime: {Time: 2025-11-05T04:04:56Z}, Reason: "AllReplicasAvailable", Message: ""},
                    {Type: "Degraded", Status: "False", LastTransitionTime: {Time: 2025-11-05T05:01:56Z}, Reason: "AsExpected", Message: ""},
                    {Type: "Progressing", Status: "True", LastTransitionTime: {Time: 2025-11-05T05:01:56Z}, Reason: "ExcessReplicas", Message: "Waiting for 1 old replica(s) to be removed"},
                ],
            },
            {
                Name: "etcd",
                Conditions: [
                    {Type: "Degraded", Status: "False", LastTransitionTime: {Time: 2025-11-05T04:58:36Z}, Reason: "AsExpected", Message: "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nGuardControllerDegraded: Missing operand on node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-x0f88pwp-f3da4-d9fgd-master-2 is unhealthy"},
                    {Type: "Progressing", Status: "True", LastTransitionTime: {Time: 2025-11-05T04:50:13Z}, Reason: "NodeInstaller", Message: "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 8; 1 node is at revision 10; 1 node is at revision 13; 0 nodes have achieved new revision 15"},
                    {Type: "Available", Status: "True", LastTransitionTime: {Time: 2025-11-05T04:13:04Z}, Reason: "AsExpected", Message: "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 0; 1 node is at revision 8; 1 node is at revision 10; 1 node is at revision 13; 0 nodes have achieved new revision 15\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-x0f88pwp-f3da4-d9fgd-master-2 is unhealthy"},
                    {Type: "Upgradeable", Status: "True", LastTransitionTime: {Time: 2025-11-05T04:03:03Z}, Reason: "AsExpected", Me...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
to contain element matching
    <*matchers.HaveFieldMatcher | 0xc0002a6e00>:
    {
        Field: "Status.Conditions",
        Expected: <*matchers.AndMatcher | 0xc0006213b0>{
            Matchers: [
                <*matchers.ContainElementMatcher | 0xc0006211d0>{
                    Element: <*matchers.AndMatcher | 0xc0006211a0>{
                        Matchers: [
                            <*matchers.HaveFieldMatcher | 0xc0002a6ce0>{Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00071d0e0>{Expected: "Available"}},
                            <*matchers.HaveFieldMatcher | 0xc0002a6d00>{Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00071d0f0>{Expected: "True"}},
                            <*matchers.HaveFieldMatcher | 0xc0002a6d20>{Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000817240>{Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc000621140>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}}, transformedValue: 3623491}},
                        ],
                        firstFailedMatcher: nil,
                    },
                    Result: nil,
                },
                <*matchers.ContainElementMatcher | 0xc000621290>{
                    Element: <*matchers.AndMatcher | 0xc000621260>{
                        Matchers: [
                            <*matchers.HaveFieldMatcher | 0xc0002a6d40>{Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00071d110>{Expected: "Progressing"}},
                            <*matchers.HaveFieldMatcher | 0xc0002a6d60>{Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00071d120>{Expected: "False"}},
                            <*matchers.HaveFieldMatcher | 0xc0002a6d80>{Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000817280>{Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc000621200>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}}, transformedValue: 2055491}},
                        ],
                        firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc0002a6d60>{Field: "Status", Expected: <*ma...
Gomega truncated this representation as it exceeds 'format.MaxLength'.
Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package.
Learn more here: https://onsi.github.io/gomega/#adjusting-output
failed: (10m32s) 2025-11-05T05:05:19 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet with the OnDelete update strategy and ControlPlaneMachineSet is updated to set MachineNamePrefix [OCPFeatureGate:CPMSMachineNamePrefix] and the provider spec of index 2 is not as expected and again MachineNamePrefix is reset should not replace the outdated machine"
I1105 05:05:22.260057 1669 client.go:1078] Error running oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all:
StdOut>
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version)
StdErr>
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version)
I1105 05:05:22.260261 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
E1105 05:06:15.795168 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)"
I1105 05:06:21.013986 1669 trace.go:236] Trace[714512444]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290 (05-Nov-2025 05:05:21.009) (total time: 60004ms):
Trace[714512444]: ---"Objects listed" error:the server was unable to return a response in the time allotted, but may still be processing the request (get events) 60004ms (05:06:21.013)
Trace[714512444]: [1m0.004852002s] [1m0.004852002s] END
E1105 05:06:21.014068 1669 reflector.go:205] "Failed to watch" err="failed to list *v1.Event: the server was unable to return a response in the time allotted, but may still be processing the request (get events)" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event"
I1105 05:06:22.418688 1669 client.go:1078] Error running oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all:
StdOut>
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version)
StdErr>
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version)
E1105 05:06:35.780584 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1)"
I1105 05:06:56.970702 1669 trace.go:236] Trace[1276196679]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290 (05-Nov-2025 05:06:27.117) (total time: 29852ms):
Trace[1276196679]: ---"Objects listed" error: 29850ms (05:06:56.967)
Trace[1276196679]: ---"Resource version extracted" 0ms (05:06:56.967)
Trace[1276196679]: ---"Objects extracted" 0ms (05:06:56.967)
Trace[1276196679]: ---"SyncWith done" 2ms (05:06:56.970)
Trace[1276196679]: ---"Resource version updated" 0ms (05:06:56.970)
Trace[1276196679]: [29.852957168s] [29.852957168s] END
time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver
node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:17Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:22Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:27Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:32Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:37Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bf86b4c932 namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{ProbeError Readiness probe error: Get \"http://10.130.2.48:8080/healthz\": dial tcp 10.130.2.48:8080: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:06:54Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:edf74f435c namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{Unhealthy Readiness probe failed: Get \"http://10.130.2.48:8080/healthz\": dial tcp 10.130.2.48:8080: connect: connection refused map[firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:06:54Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:a4f9849247 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{ProbeError Startup probe error: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:6 firstTimestamp:2025-11-05T04:09:18Z 
lastTimestamp:2025-11-05T05:06:18Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:42Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:94465d6756 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{Unhealthy Startup probe failed: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[count:6 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:18Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:47Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:a4f9849247 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{ProbeError Startup probe error: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:7 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:28Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:94465d6756 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{Unhealthy Startup probe failed: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[count:7 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:28Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:52Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:11 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:04:57Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind 
map[hmsg:a4f9849247 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{ProbeError Startup probe error: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:8 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:38Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:apiserver-5b4bf4cf7c-bm9hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:12 firstTimestamp:2025-11-05T05:04:07Z lastTimestamp:2025-11-05T05:05:02Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:94465d6756 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{Unhealthy Startup probe failed: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[count:8 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:38Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:d3e991580a namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-etcd(1374c23603b9826b929123fe721a00ce) map[firstTimestamp:2025-11-05T05:05:35Z lastTimestamp:2025-11-05T05:05:35Z reason:BackOff]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:d3e991580a namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-etcd(1374c23603b9826b929123fe721a00ce) map[count:2 firstTimestamp:2025-11-05T05:05:35Z lastTimestamp:2025-11-05T05:05:36Z reason:BackOff]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:108c13e933 namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:controller-manager-6848447799-5685f]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T05:06:43Z lastTimestamp:2025-11-05T05:06:43Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:2ab717151e namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:controller-manager-6848447799-5685f]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T05:06:43Z lastTimestamp:2025-11-05T05:06:43Z reason:Unhealthy]}" 
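
[editor's note] For readers interpreting the [FAILED] block above: the BeforeEach at clusteroperators.go:44 is a Gomega Eventually poll over clusteroperators.config.openshift.io, asserting that every operator's Status.Conditions contain Available=True, Progressing=False and Degraded=False, with the timeout (10m0s) and polling interval (10s) from the STEP line. Below is a minimal sketch of that kind of stability gate, assuming the openshift/api config/v1 types and standard Gomega async assertions; the package and function names are illustrative and not necessarily those in the repo's helper, and the 1m0s minimum-availability window from the STEP line is omitted for brevity.

// Sketch only; names are illustrative, not the actual helper in
// test/e2e/helpers/clusteroperators.go. Runs inside a Ginkgo suite,
// where a Gomega fail handler is already registered.
package helpers

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hasCondition reports whether the operator carries the given
// condition type with the given status.
func hasCondition(co configv1.ClusterOperator, t configv1.ClusterStatusConditionType, s configv1.ConditionStatus) bool {
	for _, c := range co.Status.Conditions {
		if c.Type == t && c.Status == s {
			return true
		}
	}
	return false
}

// EventuallyClusterOperatorsStable polls until every ClusterOperator is
// Available=True, Progressing=False and Degraded=False, mirroring the
// "available, not progressing and not degraded" expectation in the log.
func EventuallyClusterOperatorsStable(ctx context.Context, client configclient.Interface) {
	Eventually(func(g Gomega) {
		list, err := client.ConfigV1().ClusterOperators().List(ctx, metav1.ListOptions{})
		// API-server timeouts (the 504s seen above) fail this poll and retry.
		g.Expect(err).NotTo(HaveOccurred())
		for _, co := range list.Items {
			g.Expect(hasCondition(co, configv1.OperatorAvailable, configv1.ConditionTrue)).To(BeTrue(), co.Name)
			g.Expect(hasCondition(co, configv1.OperatorProgressing, configv1.ConditionFalse)).To(BeTrue(), co.Name)
			g.Expect(hasCondition(co, configv1.OperatorDegraded, configv1.ConditionFalse)).To(BeTrue(), co.Name)
		}
	}).WithTimeout(10 * time.Minute).WithPolling(10 * time.Second).Should(Succeed())
}

Under this reading, the failure above has two parts: some polls errored outright (the 504 StatusError, since the kube-apiserver itself was unavailable during the etcd revision rollout), and the polls that did succeed still failed the matcher because etcd and control-plane-machine-set, among others, reported Progressing=True.
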
time="2025-11-05T05:06:57Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:d3e991580a namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-etcd(1374c23603b9826b929123fe721a00ce) map[count:3 firstTimestamp:2025-11-05T05:05:35Z lastTimestamp:2025-11-05T05:05:38Z reason:BackOff]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:108c13e933 namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:controller-manager-6848447799-5685f]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:06:43Z lastTimestamp:2025-11-05T05:06:44Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:2ab717151e namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:controller-manager-6848447799-5685f]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) map[count:2 firstTimestamp:2025-11-05T05:06:43Z lastTimestamp:2025-11-05T05:06:44Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2363bb7230 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Liveness probe error: Get \"https://10.0.0.3:10259/healthz\": dial tcp 10.0.0.3:10259: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:06:54Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fe27218063 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{Unhealthy Liveness probe failed: Get \"https://10.0.0.3:10259/healthz\": dial tcp 10.0.0.3:10259: connect: connection refused map[firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:06:54Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:32Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe 
failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:32Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:33Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:a4f9849247 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{ProbeError Startup probe error: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:9 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:48Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:33Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:94465d6756 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:machine-api-controllers-dd746fdf5-nwjfg]}" message="{Unhealthy Startup probe failed: Get \"http://10.129.0.21:9441/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[count:9 firstTimestamp:2025-11-05T04:09:18Z lastTimestamp:2025-11-05T05:06:48Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:4ee35e80fc namespace:openshift-cluster-storage-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:csi-snapshot-controller-8c7f869b5-hfm7w]}" message="{BackOff Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-8c7f869b5-hfm7w_openshift-cluster-storage-operator(c5baa1a7-0c7e-4f42-82c9-ac9286af4074) map[firstTimestamp:2025-11-05T05:06:48Z lastTimestamp:2025-11-05T05:06:48Z reason:BackOff]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c34c36370 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:04:34Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind 
map[hmsg:cadbff4a67 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:04:34Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:41Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:582d213baf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:41Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:42Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:42Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-65f46c49b8-pzwpp]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-cts8l]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. 
preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-5b4bf4cf7c-bvmk5]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:43Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:43Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:44Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:582d213baf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:44Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c34c36370 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:04:44Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:cadbff4a67 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:04:44Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:45Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:45Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:47Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:582d213baf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:47Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:47Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:47Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:50Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info 
msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:582d213baf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:50Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:52Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:52Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:53Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:582d213baf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:53Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:53Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: 
connection refused map[count:4 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:53Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c34c36370 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:04:54Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:cadbff4a67 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:04:54Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:54Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:54Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:55Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:55Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" 
message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:55Z reason:ProbeError]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:55Z reason:Unhealthy]}" time="2025-11-05T05:06:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:56Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:582d213baf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:56Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:56Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:04:42Z lastTimestamp:2025-11-05T05:04:56Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:56Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind 
map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:7 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:04:56Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d34c2ed85a namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T05:04:41Z lastTimestamp:2025-11-05T05:04:59Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:02Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:02Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c34c36370 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:05:04Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:cadbff4a67 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:04:34Z lastTimestamp:2025-11-05T05:05:04Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T05:04:32Z 
lastTimestamp:2025-11-05T05:05:06Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:8 firstTimestamp:2025-11-05T05:04:32Z lastTimestamp:2025-11-05T05:05:06Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:12Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:12Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:22Z reason:ProbeError]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:22Z reason:Unhealthy]}" time="2025-11-05T05:06:58Z" level=info msg="event interval matches PodSandbox" locator="{Kind map[hmsg:c708bb5a9f namespace:openshift-kube-controller-manager-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-operator-7d9bbc89cd-hgkwr]}" message="{FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-7d9bbc89cd-hgkwr_openshift-kube-controller-manager-operator_e073643f-30d7-450b-bd03-c97ae3ca0f7c_0(e79fad882a90028e6bb0c94c168b53575fc0b0dfee4cf21ae7d29926e2d9d371): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-7d9bbc89cd-hgkwr to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\"e79fad882a90028e6bb0c94c168b53575fc0b0dfee4cf21ae7d29926e2d9d371\" Netns:\"/var/run/netns/3df1ede2-3c72-4f3a-898a-f7720eafd04b\" IfName:\"eth0\" Args:\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-7d9bbc89cd-hgkwr;K8S_POD_INFRA_CONTAINER_ID=e79fad882a90028e6bb0c94c168b53575fc0b0dfee4cf21ae7d29926e2d9d371;K8S_POD_UID=e073643f-30d7-450b-bd03-c97ae3ca0f7c\" Path:\"\" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-7d9bbc89cd-hgkwr] networking: Multus: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-7d9bbc89cd-hgkwr/e073643f-30d7-450b-bd03-c97ae3ca0f7c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-controller-manager-operator-7d9bbc89cd-hgkwr in out of cluster comm: SetNetworkStatus: failed to update the pod kube-controller-manager-operator-7d9bbc89cd-hgkwr in out of cluster comm: status update failed for pod /: Get \"https://api-int.ci-op-x0f88pwp-f3da4.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7d9bbc89cd-hgkwr?timeout=1m0s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n': StdinData: {\"auxiliaryCNIChainName\":\"vendor-cni-chain\",\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"} map[firstTimestamp:2025-11-05T05:05:25Z lastTimestamp:2025-11-05T05:05:25Z reason:FailedCreatePodSandBox]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:52Z reason:ProbeError]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:05:52Z reason:Unhealthy]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:05:02Z 
lastTimestamp:2025-11-05T05:06:02Z reason:ProbeError]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:02Z reason:Unhealthy]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:12Z reason:ProbeError]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:12Z reason:Unhealthy]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches PodSandbox" locator="{Kind map[hmsg:ba3a0a8de1 namespace:openshift-kube-controller-manager-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-operator-7d9bbc89cd-hgkwr]}" message="{FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-7d9bbc89cd-hgkwr_openshift-kube-controller-manager-operator_e073643f-30d7-450b-bd03-c97ae3ca0f7c_0(5d88318349fef51f5d76f3dee8343934566e5eefa7887e738b866546c459520b): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-7d9bbc89cd-hgkwr to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\"5d88318349fef51f5d76f3dee8343934566e5eefa7887e738b866546c459520b\" Netns:\"/var/run/netns/ad060e2b-6956-47ea-a05e-6b9eb38f761b\" IfName:\"eth0\" Args:\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-7d9bbc89cd-hgkwr;K8S_POD_INFRA_CONTAINER_ID=5d88318349fef51f5d76f3dee8343934566e5eefa7887e738b866546c459520b;K8S_POD_UID=e073643f-30d7-450b-bd03-c97ae3ca0f7c\" Path:\"\" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-7d9bbc89cd-hgkwr] networking: Multus: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-7d9bbc89cd-hgkwr/e073643f-30d7-450b-bd03-c97ae3ca0f7c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod kube-controller-manager-operator-7d9bbc89cd-hgkwr in out of cluster comm: SetNetworkStatus: failed to update the pod kube-controller-manager-operator-7d9bbc89cd-hgkwr in 
out of cluster comm: status update failed for pod /: Get \"https://api-int.ci-op-x0f88pwp-f3da4.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-7d9bbc89cd-hgkwr?timeout=1m0s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n': StdinData: {\"auxiliaryCNIChainName\":\"vendor-cni-chain\",\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"} map[firstTimestamp:2025-11-05T05:06:26Z lastTimestamp:2025-11-05T05:06:26Z reason:FailedCreatePodSandBox]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:bab48a2339 namespace:openshift-cloud-network-config-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cloud-network-config-controller-594bb6bf45-57tjr]}" message="{BackOff Back-off restarting failed container controller in pod cloud-network-config-controller-594bb6bf45-57tjr_openshift-cloud-network-config-controller(23b74885-a455-4733-85fa-47020c37abd2) map[firstTimestamp:2025-11-05T05:06:31Z lastTimestamp:2025-11-05T05:06:31Z reason:BackOff]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:f9b7b13437 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cluster-baremetal-operator-5f697474c6-h5nph]}" message="{BackOff Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5f697474c6-h5nph_openshift-machine-api(cbf3021a-2688-46b0-bfd3-eee18c3154eb) map[firstTimestamp:2025-11-05T05:06:32Z lastTimestamp:2025-11-05T05:06:32Z reason:BackOff]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:42Z reason:ProbeError]}" time="2025-11-05T05:06:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:7 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:42Z reason:Unhealthy]}" time="2025-11-05T05:07:00Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: 
connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:52Z reason:ProbeError]}" time="2025-11-05T05:07:00Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:8 firstTimestamp:2025-11-05T05:05:02Z lastTimestamp:2025-11-05T05:06:52Z reason:Unhealthy]}" E1105 05:07:09.080604 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-oauth-apiserver/apiserver-5b4bf4cf7c-bm9hr &Pod{ObjectMeta:{apiserver-5b4bf4cf7c-bm9hr apiserver-5b4bf4cf7c- openshift-oauth-apiserver 1c599460-2e33-4379-aa05-7781836d8ada 49554 2 2025-11-05 04:59:23 +0000 UTC 2025-11-05 05:06:00 +0000 UTC 0xc004e69598 map[apiserver:true app:openshift-oauth-apiserver oauth-apiserver-anti-affinity:true pod-template-hash:5b4bf4cf7c revision:1] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.96/23"],"mac_address":"0a:58:0a:80:00:60","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.96/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.96" ], "mac": "0a:58:0a:80:00:60", "default": true, "dns": {} }] openshift.io/required-scc:privileged openshift.io/scc:privileged operator.openshift.io/dep-openshift-oauth-apiserver.etcd-client.secret:odMusQ== operator.openshift.io/dep-openshift-oauth-apiserver.etcd-serving-ca.configmap:bod41Q== security.openshift.io/validated-scc-subject-type:serviceaccount] [{apps/v1 ReplicaSet apiserver-5b4bf4cf7c cdf264ea-d365-43ec-b44b-302e9d8d3198 0xc004e696c7 0xc004e696c8}] [] [{kube-controller-manager Update v1 2025-11-05 04:59:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/required-scc":{},"f:operator.openshift.io/dep-openshift-oauth-apiserver.etcd-client.secret":{},"f:operator.openshift.io/dep-openshift-oauth-apiserver.etcd-serving-ca.configmap":{},"f:target.workload.openshift.io/management":{}},"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:oauth-apiserver-anti-affinity":{},"f:pod-template-hash":{},"f:revision":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdf264ea-d365-43ec-b44b-302e9d8d3198\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"oauth-apiserver\"}":{".":{},"f:args":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"POD_NAMESPACE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:startupProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/log/oauth-apiserver\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/audit\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/etcd-serving-ca\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/trusted-ca-bundle\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/encryption-config\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/etcd-client\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/serving-cert\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:initContainers":{".":{},"k:{\"name\":\"fix-audit-permissions\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/log/oauth-apiserver\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"audit-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"audit-policies\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"encryption-config\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:optional":{},"f:secretName":{}}},"k:{\
"name\":\"etcd-client\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"etcd-serving-ca\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"serving-cert\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"trusted-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{},"f:optional":{}},"f:name":{}}}}} } {kube-scheduler Update v1 2025-11-05 04:59:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 05:00:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.ovn.org/pod-networks":{}}}} status} {multus-daemon Update v1 2025-11-05 05:00:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:04:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{"f:observedGeneration":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:initContainerStatuses":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.96\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:audit-policies,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:audit-1,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:etcd-client,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:etcd-client,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:etcd-serving-ca,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:etcd-serving-ca,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:serving-cert,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:serving-cert,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:trusted-ca-bundle,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:trusted-ca-bundle,},Items:[]KeyToPath{KeyToPath{Key:ca-bundle.crt,Path:tls-ca-bundle.pem,Mode:nil,},},DefaultMode:*420,Optional:*true,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:encryption-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:encryption-config-1,Items:[]KeyToPath{},DefaultMode:*420,Optional:*true,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPer
sistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:audit-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/var/log/oauth-apiserver,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access-zplpb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:oauth-apiserver,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem ]; then echo "Copying system trust bundle" cp -f /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem fi exec oauth-apiserver start \ --secure-port=8443 \ --audit-log-path=/var/log/oauth-apiserver/audit.log \ --audit-log-format=json \ --audit-log-maxsize=100 \ --audit-log-maxbackup=10 \ --audit-policy-file=/var/run/configmaps/audit/policy.yaml \ --etcd-cafile=/var/run/configmaps/etcd-serving-ca/ca-bundle.crt \ --etcd-keyfile=/var/run/secrets/etcd-client/tls.key \ --etcd-certfile=/var/run/secrets/etcd-client/tls.crt \ --etcd-healthcheck-timeout=9s \ --etcd-readycheck-timeout=9s \ --shutdown-delay-duration=50s \ --shutdown-send-retry-after=true \ --tls-private-key-file=/var/run/secrets/serving-cert/tls.key \ --tls-cert-file=/var/run/secrets/serving-cert/tls.crt \ --enable-priority-and-fairness=false \ --api-audiences=https://kubernetes.default.svc \ --cors-allowed-origins='//127\.0\.0\.1(:|$)' \ 
--cors-allowed-origins='//localhost(:|$)' \ --etcd-servers=https://10.0.0.3:2379 \ --etcd-servers=https://10.0.0.5:2379 \ --etcd-servers=https://10.0.0.7:2379 \ --feature-gates=CBORServingAndStorage=false \ --feature-gates=ClientsAllowCBOR=false \ --feature-gates=ClientsPreferCBOR=false \ --tls-cipher-suites=TLS_AES_128_GCM_SHA256 \ --tls-cipher-suites=TLS_AES_256_GCM_SHA384 \ --tls-cipher-suites=TLS_CHACHA20_POLY1305_SHA256 \ --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 \ --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 \ --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 \ --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 \ --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 \ --tls-min-version=VersionTLS12 \ --v=2 ],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:audit-policies,ReadOnly:false,MountPath:/var/run/configmaps/audit,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-serving-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-serving-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:encryption-config,ReadOnly:false,MountPath:/var/run/secrets/encryption-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:audit-dir,ReadOnly:false,MountPath:/var/log/oauth-apiserver,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zplpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:livez?exclude=etcd,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:readyz?exclude=etcd&exclude=etcd-readiness,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:livez,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*120,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:oauth-apiserver-sa,DeprecatedServiceAccount:oauth-apiserver-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:oauth-apiserver-sa-dockercfg-gfhxc,},},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[]PodAffinityTerm{PodAffinityTerm{LabelSelector:&v1.LabelSelector{MatchLabels:map[string]string{apiserver: true,app: openshift-oauth-apiserver,oauth-apiserver-anti-affinity: true,},MatchExpressions:[]LabelSelectorRequirement{},},Namespaces:[],TopologyKey:kubernetes.io/hostname,NamespaceSelector:nil,MatchLabelKeys:[],MismatchLabelKeys:[],},},PreferredDuringSchedulingIgnoredDuringExecution:[]WeightedPodAffinityTerm{},},},SchedulerName:default-scheduler,InitContainers:[]Container{Container{Name:fix-audit-permissions,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335,Command:[sh -c chmod 0700 /var/log/oauth-apiserver && touch /var/log/oauth-apiserver/audit.log && chmod 0600 /var/log/oauth-apiserver/*],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{15 -3} {} 15m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:audit-dir,ReadOnly:false,MountPath:/var/log/oauth-apiserver,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zplpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node-role.kubernetes.io/master,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/memory-pressure,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:DisruptionTarget,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:00 +0000 UTC,Reason:EvictionByEvictionAPI,Message:Eviction API: evicting,ObservedGeneration:0,},PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:00:17 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:00:17 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:00:22 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:00:22 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:00:15 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.96,StartTime:2025-11-05 05:00:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:oauth-apiserver,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 05:00:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335,ContainerID:cri-o://8114b43d45b07779ce5967cb1446961df3142dbd7900588754c6f7d51471f821,Started:*true,AllocatedResources:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{209715200 0} {} BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:audit-policies,MountPath:/var/run/configmaps/audit,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:etcd-client,MountPath:/var/run/secrets/etcd-client,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:etcd-serving-ca,MountPath:/var/run/configmaps/etcd-serving-ca,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:trusted-ca-bundle,MountPath:/var/run/configmaps/trusted-ca-bundle,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:serving-cert,MountPath:/var/run/secrets/serving-cert,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:encryption-config,MountPath:/var/run/secrets/encryption-config,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:audit-dir,MountPath:/var/log/oauth-apiserver,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access-zplpb,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{ContainerStatus{Name:fix-audit-permissions,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 05:00:16 +0000 UTC,FinishedAt:2025-11-05 05:00:17 +0000 UTC,ContainerID:cri-o://44f5a36df16175c65ac77d915649fe3f94444c2bb6a3d53844eea92fb3525251,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:0ebeb6e774700507cc97ce2888745f0087a6e8839af5f36fdae7967be7049335,ContainerID:cri-o://44f5a36df16175c65ac77d915649fe3f94444c2bb6a3d53844eea92fb3525251,Started:*false,AllocatedResources:ResourceList{cpu: {{15 -3} {} 15m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{15 -3} {} 15m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:audit-dir,MountPath:/var/log/oauth-apiserver,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access-zplpb,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.96,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:2,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:07:09.081512 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-console/console-b5bbd99c7-f4lnt &Pod{ObjectMeta:{console-b5bbd99c7-f4lnt console-b5bbd99c7- openshift-console d059b0d0-d47b-4207-8cfa-ba38185d5722 49622 2 2025-11-05 04:30:53 +0000 UTC 2025-11-05 05:04:40 +0000 UTC 0xc008122db8 map[app:console component:ui pod-template-hash:b5bbd99c7] map[console.openshift.io/authn-ca-trust-config-version:8856 console.openshift.io/console-config-version:31762 console.openshift.io/image:quay-proxy.ci.openshift.org/openshift/ci@sha256:eddfd026b22cbafd371239da9cd47b94a2cd46ce15debb01bdeaeafaaee3bdb0 console.openshift.io/infrastructure-config-version:549 console.openshift.io/oauth-secret-version:7661 console.openshift.io/proxy-config-version:567 console.openshift.io/service-ca-config-version:26166 console.openshift.io/trusted-ca-config-version:26212 k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.83/23"],"mac_address":"0a:58:0a:80:00:53","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.83/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.83" ], "mac": "0a:58:0a:80:00:53", "default": true, "dns": {} }] openshift.io/required-scc:restricted-v2 openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default security.openshift.io/validated-scc-subject-type:user] [{apps/v1 ReplicaSet console-b5bbd99c7 3af2976f-0c07-4a35-84bc-dabeb9753065 0xc008122e67 0xc008122e68}] [] [{ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 04:30:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.ovn.org/pod-networks":{}}}} status} {kube-controller-manager Update v1 2025-11-05 04:30:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:console.openshift.io/authn-ca-trust-config-version":{},"f:console.openshift.io/console-config-version":{},"f:console.openshift.io/image":{},"f:console.openshift.io/infrastructure-config-version":{},"f:console.openshift.io/oauth-secret-version":{},"f:console.openshift.io/proxy-config-version":{},"f:console.openshift.io/service-ca-config-version":{},"f:console.openshift.io/trusted-ca-config-version":{},"f:openshift.io/required-scc":{},"f:target.workload.openshift.io/management":{}},"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:component":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3af2976f-0c07-4a35-84bc-dabeb9753065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"console\"}":{".":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}}},"f:image":{},"f:imagePullPolicy":{},"f:lifecycle":{".":{},"f:preStop":{".":{},"f:exec":{".":{},"f:command":{}}}},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:startupProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/pki/ca-trust/extracted/pem\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/console-config\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/oauth-config\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/oauth-serving-cert\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/service-ca\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/serving-cert\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsNonRoot":{},"f:seccompProfile":{".":{},"f:type":{}}},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"console-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"console-oauth-config\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"console-serving-cert\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"oauth-serving-cert\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"service-ca\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},
"f:name":{}},"k:{\"name\":\"trusted-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} } {multus-daemon Update v1 2025-11-05 04:30:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:04:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{"f:observedGeneration":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:console-serving-cert,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:console-serving-cert,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:console-oauth-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:console-oauth-config,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:console-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:console-config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:service-ca,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:service-ca,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,S
caleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:trusted-ca-bundle,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:trusted-ca-bundle,},Items:[]KeyToPath{KeyToPath{Key:ca-bundle.crt,Path:tls-ca-bundle.pem,Mode:nil,},},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:oauth-serving-cert,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:oauth-serving-cert,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access-b5zp5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:console,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:eddfd026b22cbafd371239da9cd47b94a2cd46ce15debb01bdeaeafaaee3bdb0,Command:[/opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-config.yaml --service-ca-file=/var/service-ca/service-ca.crt 
--v=2],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:console-serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:console-oauth-config,ReadOnly:true,MountPath:/var/oauth-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:console-config,ReadOnly:true,MountPath:/var/console-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca,ReadOnly:true,MountPath:/var/service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:oauth-serving-cert,ReadOnly:true,MountPath:/var/oauth-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5zp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:1,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000480000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*40,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: 
,},ServiceAccountName:console,DeprecatedServiceAccount:console,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c22,c9,},RunAsUser:nil,RunAsNonRoot:*true,SupplementalGroups:[],FSGroup:*1000480000,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:console-dockercfg-fjplt,},},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[]PodAffinityTerm{PodAffinityTerm{LabelSelector:&v1.LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{LabelSelectorRequirement{Key:component,Operator:In,Values:[ui],},},},Namespaces:[],TopologyKey:kubernetes.io/hostname,NamespaceSelector:nil,MatchLabelKeys:[],MismatchLabelKeys:[],},},PreferredDuringSchedulingIgnoredDuringExecution:[]WeightedPodAffinityTerm{},},},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node-role.kubernetes.io/master,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/memory-pressure,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-cluster-critical,Priority:*2000000000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:DisruptionTarget,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:00 +0000 UTC,Reason:EvictionByEvictionAPI,Message:Eviction API: evicting,ObservedGeneration:0,},PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:30:54 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:30:53 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:31:04 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:31:04 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:30:53 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.83,StartTime:2025-11-05 04:30:53 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:console,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 04:30:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:eddfd026b22cbafd371239da9cd47b94a2cd46ce15debb01bdeaeafaaee3bdb0,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:0f5f6f3ad9632af49da4ef6ba96b8e219287fdf6bbd113d7fe0af7a2d73b3c48,ContainerID:cri-o://8acea3089a66557754c7ecfda03e135856192bccf202fb5ff4b915c7ea4232f4,Started:*true,AllocatedResources:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{104857600 0} {} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:console-serving-cert,MountPath:/var/serving-cert,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:console-oauth-config,MountPath:/var/oauth-config,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:console-config,MountPath:/var/console-config,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:service-ca,MountPath:/var/service-ca,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:trusted-ca-bundle,MountPath:/etc/pki/ca-trust/extracted/pem,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:oauth-serving-cert,MountPath:/var/oauth-serving-cert,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:kube-api-access-b5zp5,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:1000480000,GID:0,SupplementalGroups:[0 1000480000],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.83,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:2,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:07:09.083362 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-authentication/oauth-openshift-85b9b447d5-9b7vv &Pod{ObjectMeta:{oauth-openshift-85b9b447d5-9b7vv oauth-openshift-85b9b447d5- openshift-authentication e571c97b-f68f-4c23-9630-dc454a77b12d 49366 2 2025-11-05 04:24:19 +0000 UTC 2025-11-05 05:04:40 +0000 UTC 0xc009a093b8 map[app:oauth-openshift oauth-openshift-anti-affinity:true pod-template-hash:85b9b447d5] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.75/23"],"mac_address":"0a:58:0a:80:00:4b","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.75/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.75" ], "mac": "0a:58:0a:80:00:4b", "default": true, "dns": {} }] openshift.io/required-scc:privileged openshift.io/scc:privileged operator.openshift.io/bootstrap-user-exists:true 
operator.openshift.io/rvs-hash:CQ0mX1AxMiRTxm4gVn5--9hmpBw0UscZuBXUrqhNF3BUeP9rD2dycsT_JudyObeE_C9qhqYABZMAtGMLfnXeog security.openshift.io/validated-scc-subject-type:serviceaccount] [{apps/v1 ReplicaSet oauth-openshift-85b9b447d5 4600d29f-9a72-4d8d-ba56-9930e5d24eb5 0xc009a09467 0xc009a09468}] [] [{kube-controller-manager Update v1 2025-11-05 04:24:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:openshift.io/required-scc":{},"f:operator.openshift.io/bootstrap-user-exists":{},"f:operator.openshift.io/rvs-hash":{},"f:target.workload.openshift.io/management":{}},"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:oauth-openshift-anti-affinity":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4600d29f-9a72-4d8d-ba56-9930e5d24eb5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"oauth-openshift\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:lifecycle":{".":{},"f:preStop":{".":{},"f:exec":{".":{},"f:command":{}}}},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":6443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:readOnlyRootFilesystem":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/config/system/configmaps/v4-0-config-system-cliconfig\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/system/configmaps/v4-0-config-system-service-ca\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/system/secrets/v4-0-config-system-ocp-branding-template\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/system/secrets/v4-0-config-system-router-certs\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/system/secrets/v4-0-config-system-serving-cert\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/system/secrets/v4-0-config-system-session\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/user/template/secret/v4-0-config-user-template-error\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/user/template/secret/v4-0-config-user-template-login\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/config/user/template/secret/v4-0-config-user-template-provider-selection\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/log/oauth-server\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/audit\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:resta
rtPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"audit-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"audit-policies\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"v4-0-config-system-cliconfig\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"v4-0-config-system-ocp-branding-template\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"v4-0-config-system-router-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"v4-0-config-system-service-ca\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"v4-0-config-system-serving-cert\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"v4-0-config-system-session\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"v4-0-config-system-trusted-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{},"f:optional":{}},"f:name":{}},"k:{\"name\":\"v4-0-config-user-template-error\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:optional":{},"f:secretName":{}}},"k:{\"name\":\"v4-0-config-user-template-login\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:optional":{},"f:secretName":{}}},"k:{\"name\":\"v4-0-config-user-template-provider-selection\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:optional":{},"f:secretName":{}}}}}} } {kube-scheduler Update v1 2025-11-05 04:24:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 04:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.ovn.org/pod-networks":{}}}} status} {multus-daemon Update v1 2025-11-05 04:24:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:04:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{"f:observedGeneration":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:audit-policies,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:audit,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:audit-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/var/log/oauth-server,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-session,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-system-session,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-cliconfig,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:v4-0-config-system-cliconfig,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-serving-cert,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-system-serving-cert,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-service-ca,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:v4-0-config-system-service-ca,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersis
tentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-router-certs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-system-router-certs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-ocp-branding-template,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-system-ocp-branding-template,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-user-template-login,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-user-template-login,Items:[]KeyToPath{},DefaultMode:*420,Optional:*true,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-user-template-provider-selection,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-user-template-provider-selection,Items:[]KeyToPath{},DefaultMode:*420,Optional:*true,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-user-template-error,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:v4-0-config-user-template-error,Items:[]KeyToPath{},DefaultMode:*420,Optional:*true,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:v4-0-config-system-trusted-ca-bundle,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalO
bjectReference:LocalObjectReference{Name:v4-0-config-system-trusted-ca-bundle,},Items:[]KeyToPath{},DefaultMode:*420,Optional:*true,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access-hsxp8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:oauth-openshift,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:8a79b5e5c434dee6727ec517a9ec7b555e7b6bff041dc98532dae08b36ef5fb4,Command:[/bin/bash -ec],Args:[if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then echo "Copying system trust bundle" cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem fi exec oauth-server osinserver \ --config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig \ --v=2 \ --audit-log-format=json \ --audit-log-maxbackup=10 \ --audit-log-maxsize=100 \ --audit-log-path=/var/log/oauth-server/audit.log \ --audit-policy-file=/var/run/configmaps/audit/audit.yaml ],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:6443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:audit-policies,ReadOnly:false,MountPath:/var/run/configmaps/audit,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:audit-dir,ReadOnly:false,MountPath:/var/log/oauth-server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-session,ReadOnly:true,MountPath:/var/config/system/secrets/v4-0-config-system-session,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-cliconfig,ReadOnly:true,MountPath:/var/config/system/configmaps/v4-0-config-system-cliconfig,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-serving-cert,ReadOnly:true,MountPath:/var/config/system/secrets/v4-0-config-system-serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-service-ca,ReadOnly:true,MountPath:/var/config/system/configmaps/v4-0-config-system-service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-router-certs,ReadOnly:true,MountPath:/var/config/system/secrets/v4-0-config-system-router-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-ocp-branding-template,ReadOnly:true,MountPath:/var/config/system/secrets/v4-0-config-system-ocp-branding-template,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-user-template-login,ReadOnly:true,MountPath:/var/config/user/template/secret/v4-0-config-user-template-login,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-user-template-provider-selection,ReadOnly:true,MountPath:/var/config/user/template/secret/v4-0-config-user-template-provider-selection,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-user-template-error,ReadOnly:true,MountPath:/var/config/user/template/secret/v4-0-config-user-template-error,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:v4-0-config-system-trusted-ca-bundle,ReadOnly:true,MountPath:/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 6443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 6443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[sleep 
25],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*40,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:oauth-openshift,DeprecatedServiceAccount:oauth-openshift,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:oauth-openshift-dockercfg-zk62q,},},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[]PodAffinityTerm{PodAffinityTerm{LabelSelector:&v1.LabelSelector{MatchLabels:map[string]string{app: oauth-openshift,oauth-openshift-anti-affinity: true,},MatchExpressions:[]LabelSelectorRequirement{},},Namespaces:[],TopologyKey:kubernetes.io/hostname,NamespaceSelector:nil,MatchLabelKeys:[],MismatchLabelKeys:[],},},PreferredDuringSchedulingIgnoredDuringExecution:[]WeightedPodAffinityTerm{},},},SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node-role.kubernetes.io/master,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/memory-pressure,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-cluster-critical,Priority:*2000000000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:DisruptionTarget,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:00 +0000 UTC,Reason:EvictionByEvictionAPI,Message:Eviction API: evicting,ObservedGeneration:0,},PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:24:55 +0000 
UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:24:53 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:24:55 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:24:55 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:24:53 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.75,StartTime:2025-11-05 04:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:oauth-openshift,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 04:24:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:8a79b5e5c434dee6727ec517a9ec7b555e7b6bff041dc98532dae08b36ef5fb4,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:8a79b5e5c434dee6727ec517a9ec7b555e7b6bff041dc98532dae08b36ef5fb4,ContainerID:cri-o://c4d7f60807928a0dcd0d5ab99463df186cf123f2963ab045fcf2238564c84b32,Started:*true,AllocatedResources:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:audit-policies,MountPath:/var/run/configmaps/audit,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:audit-dir,MountPath:/var/log/oauth-server,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:v4-0-config-system-session,MountPath:/var/config/system/secrets/v4-0-config-system-session,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-system-cliconfig,MountPath:/var/config/system/configmaps/v4-0-config-system-cliconfig,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-system-serving-cert,MountPath:/var/config/system/secrets/v4-0-config-system-serving-cert,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-system-service-ca,MountPath:/var/config/system/configmaps/v4-0-config-system-service-ca,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-system-router-certs,MountPath:/var/config/system/secrets/v4-0-config-system-router-certs,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-system-ocp-branding-template,MountPath:/var/config/system/secrets/v4-0-config-system-ocp-branding-template,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-user-template-login,MountPath:/var/config/user/template/secret/v4-0-config-user-template-login,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-user-template-provider-selection,MountPath:/var/config/user/template/secret/v4-0-config-user-template-provider-selection,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:v4-0-config-user-template-error,MountPath:/var/config/user/template/secret/v4-0-config-user-template-error,ReadOnly:true,RecursiveReadOnly:
*Disabled,},VolumeMountStatus{Name:v4-0-config-system-trusted-ca-bundle,MountPath:/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:kube-api-access-hsxp8,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.75,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:2,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:07:09.084167 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-apiserver/apiserver-77dcb99c96-qz8dc &Pod{ObjectMeta:{apiserver-77dcb99c96-qz8dc apiserver-77dcb99c96- openshift-apiserver 23011f0e-782b-42d2-b2cb-f4da26368388 49988 2 2025-11-05 04:28:16 +0000 UTC 2025-11-05 05:04:53 +0000 UTC 0xc00ca7a4e0 map[apiserver:true app:openshift-apiserver-a openshift-apiserver-anti-affinity:true pod-template-hash:77dcb99c96 revision:1] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.79/23"],"mac_address":"0a:58:0a:80:00:4f","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.79/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.79" ], "mac": "0a:58:0a:80:00:4f", "default": true, "dns": {} }] openshift.io/required-scc:privileged openshift.io/scc:privileged operator.openshift.io/dep-desired.generation:7 operator.openshift.io/dep-openshift-apiserver.config.configmap:XLQeZw== operator.openshift.io/dep-openshift-apiserver.etcd-client.secret:odMusQ== operator.openshift.io/dep-openshift-apiserver.etcd-serving-ca.configmap:bod41Q== operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap:aV9avg== operator.openshift.io/dep-openshift-apiserver.trusted-ca-bundle.configmap:ElMHxA== security.openshift.io/validated-scc-subject-type:serviceaccount] [{apps/v1 ReplicaSet apiserver-77dcb99c96 5a76fb5f-8ccb-4fd2-bfac-06a60e594a42 0xc00ca7a607 0xc00ca7a608}] [] [{kube-controller-manager Update v1 2025-11-05 04:28:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/required-scc":{},"f:operator.openshift.io/dep-desired.generation":{},"f:operator.openshift.io/dep-openshift-apiserver.config.configmap":{},"f:operator.openshift.io/dep-openshift-apiserver.etcd-client.secret":{},"f:operator.openshift.io/dep-openshift-apiserver.etcd-serving-ca.configmap":{},"f:operator.openshift.io/dep-openshift-apiserver.image-import-ca.configmap":{},"f:operator.openshift.io/dep-openshift-apiserver.trusted-ca-bundle.configmap":{},"f:target.workload.openshift.io/management":{}},"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:openshift-apiserver-anti-affinity":{},"f:pod-template-hash":{},"f:revision":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a76fb5f-8ccb-4fd2-bfac-06a60e594a42\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"openshift-apiserver\"}":{".":{},"f:args":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"POD_NAMESPACE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":8443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:readOnlyRootFilesystem":{},"f:runAsUser":{}},"f:startupProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/lib/kubelet/\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}},"k:{\"mountPath\":\"/var/log/openshift-apiserver\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/audit\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/config\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/etcd-serving-ca\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/image-import-ca\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/configmaps/trusted-ca-bundle\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/encryption-config\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/etcd-client\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/serving-cert\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"openshift-apiserver-check-endpoints\"}":{".":{},"f:args":{},"f:command":{},"f:env":{".":{},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}},"k:{\"name\":\"POD_NAMESPACE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{}}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":17698,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:resourc
es":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:initContainers":{".":{},"k:{\"name\":\"fix-audit-permissions\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/var/log/openshift-apiserver\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"audit\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"audit-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"encryption-config\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:optional":{},"f:secretName":{}}},"k:{\"name\":\"etcd-client\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"etcd-serving-ca\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}},"k:{\"name\":\"image-import-ca\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{},"f:optional":{}},"f:name":{}},"k:{\"name\":\"node-pullsecrets\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"serving-cert\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}},"k:{\"name\":\"trusted-ca-bundle\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{},"f:optional":{}},"f:name":{}}}}} } {kube-scheduler Update v1 2025-11-05 04:28:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 04:28:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.ovn.org/pod-networks":{}}}} status} {multus-daemon Update v1 2025-11-05 04:28:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:03:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{"f:observedGeneration":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:initContainerStatuses":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:node-pullsecrets,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/var/lib/kubelet/,Type:*Directory,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:config,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:audit,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:audit-1,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:etcd-client,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:etcd-client,Items:[]KeyToPath{},DefaultMode:*384,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:etcd-serving-ca,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:etcd-serving-ca,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:image-import-ca,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:image-import-ca,},Items:[]KeyToPath{},DefaultMode:*420,Optional:*true,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Epheme
ral:nil,Image:nil,},},Volume{Name:serving-cert,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:serving-cert,Items:[]KeyToPath{},DefaultMode:*384,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:trusted-ca-bundle,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:trusted-ca-bundle,},Items:[]KeyToPath{KeyToPath{Key:ca-bundle.crt,Path:tls-ca-bundle.pem,Mode:nil,},},DefaultMode:*420,Optional:*true,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:encryption-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:encryption-config-1,Items:[]KeyToPath{},DefaultMode:*384,Optional:*true,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:audit-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/var/log/openshift-apiserver,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access-dqmzp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata
.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:openshift-apiserver,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem ]; then echo "Copying system trust bundle" cp -f /var/run/configmaps/trusted-ca-bundle/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem fi exec openshift-apiserver start --config=/var/run/configmaps/config/config.yaml -v=2 ],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:node-pullsecrets,ReadOnly:true,MountPath:/var/lib/kubelet/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:audit,ReadOnly:false,MountPath:/var/run/configmaps/audit,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-serving-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-serving-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:image-import-ca,ReadOnly:false,MountPath:/var/run/configmaps/image-import-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:false,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:encryption-config,ReadOnly:false,MountPath:/var/run/secrets/encryption-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:audit-dir,ReadOnly:false,MountPath:/var/log/openshift-apiserver,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqmzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:liv
ez?exclude=etcd,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:readyz?exclude=etcd&exclude=etcd-readiness,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:livez,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:30,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},Container{Name:openshift-apiserver-check-endpoints,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,Command:[cluster-kube-apiserver-operator check-endpoints],Args:[--listen 0.0.0.0:17698 --namespace $(POD_NAMESPACE) --v 2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:check-endpoints,HostPort:0,ContainerPort:17698,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,FileKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqmzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*120,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: 
,},ServiceAccountName:openshift-apiserver-sa,DeprecatedServiceAccount:openshift-apiserver-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:openshift-apiserver-sa-dockercfg-jv6fg,},},Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[]PodAffinityTerm{PodAffinityTerm{LabelSelector:&v1.LabelSelector{MatchLabels:map[string]string{apiserver: true,app: openshift-apiserver-a,openshift-apiserver-anti-affinity: true,},MatchExpressions:[]LabelSelectorRequirement{},},Namespaces:[],TopologyKey:kubernetes.io/hostname,NamespaceSelector:nil,MatchLabelKeys:[],MismatchLabelKeys:[],},},PreferredDuringSchedulingIgnoredDuringExecution:[]WeightedPodAffinityTerm{},},},SchedulerName:default-scheduler,InitContainers:[]Container{Container{Name:fix-audit-permissions,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb,Command:[sh -c chmod 0700 /var/log/openshift-apiserver && touch /var/log/openshift-apiserver/audit.log && chmod 0600 /var/log/openshift-apiserver/*],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{15 -3} {} 15m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:audit-dir,ReadOnly:false,MountPath:/var/log/openshift-apiserver,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqmzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node-role.kubernetes.io/master,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},Toleration{Key:node-role.kubernetes.io/control-plane,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:nil,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*120,},Toleration{Key:node.kubernetes.io/memory-pressure,Operator:Exists,Value:,Effect:NoSchedule,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCl
assName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:DisruptionTarget,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:00 +0000 UTC,Reason:EvictionByEvictionAPI,Message:Eviction API: evicting,ObservedGeneration:0,},PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:28:30 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:28:30 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:03:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [openshift-apiserver],ObservedGeneration:2,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:03:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [openshift-apiserver],ObservedGeneration:2,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:28:29 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.79,StartTime:2025-11-05 04:28:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:openshift-apiserver,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 04:28:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb,ContainerID:cri-o://07a40000d541ea4e083db5e653da75ec0bcefa13d05549bef96671acb3f1c661,Started:*true,AllocatedResources:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{209715200 0} {} BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:node-pullsecrets,MountPath:/var/lib/kubelet/,ReadOnly:true,RecursiveReadOnly:*Disabled,},VolumeMountStatus{Name:config,MountPath:/var/run/configmaps/config,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:audit,MountPath:/var/run/configmaps/audit,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:etcd-client,MountPath:/var/run/secrets/etcd-client,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:etcd-serving-ca,MountPath:/var/run/configmaps/etcd-serving-ca,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:image-import-ca,MountPath:/var/run/configmaps/image-import-ca,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:trusted-ca-bundle,MountPath:/var/run/configmaps/trusted-ca-bundle,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:serving-cert,MountPath:/var/run/secrets/serving-cert,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:encryption-config,MountPath:/var/run/secrets/encryption-config,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:audit-dir,MountPath:/var/log/openshift-apiserver,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access-dqmzp,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},ContainerStatus{Name:openshift-apiserver-check-endpoints,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 04:28:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ContainerID:cri-o://8d8ee433bf44b1a025dbb4a89f22391e75271de9111ae6b4eba696d3cb6fdcbc,Started:*true,AllocatedResources:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kube-api-access-dqmzp,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{ContainerStatus{Name:fix-audit-permissions,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 04:28:30 +0000 UTC,FinishedAt:2025-11-05 04:28:30 +0000 
UTC,ContainerID:cri-o://d3afb071669aa3a19dc8b26e8d5b06a59912b24b3027fe33313d655cd6d5d3c3,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:4139ff3d425af304243a5b251be8a08b0388458ebd6752e91ad983e415eb04eb,ContainerID:cri-o://d3afb071669aa3a19dc8b26e8d5b06a59912b24b3027fe33313d655cd6d5d3c3,Started:*false,AllocatedResources:ResourceList{cpu: {{15 -3} {} 15m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{15 -3} {} 15m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:audit-dir,MountPath:/var/log/openshift-apiserver,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access-dqmzp,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.79,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:2,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:07:09.085693 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-etcd/etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1 &Pod{ObjectMeta:{etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1 openshift-etcd b4f5ada7-83b8-48e0-bae0-6fa560f041f1 50049 2 2025-11-05 04:13:52 +0000 UTC 2025-11-05 05:04:16 +0000 UTC 0xc002610de8 map[app:guard] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.53/23"],"mac_address":"0a:58:0a:80:00:35","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.53/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.53" ], "mac": "0a:58:0a:80:00:35", "default": true, "dns": {} }] target.workload.openshift.io/management:{"effect": "PreferredDuringScheduling"}] [] [] [{ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 04:13:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.ovn.org/pod-networks":{}}}} status} {multus-daemon Update v1 2025-11-05 04:13:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {cluster-etcd-operator Update v1 2025-11-05 04:14:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:target.workload.openshift.io/management":{}},"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"guard\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:host":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeName":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} } {kubelet Update v1 2025-11-05 05:04:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.53\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hvj2n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:guard,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:dc570b7d57ccf227c0776bb60dae4405b32b71bb143dc8279fbbf1c7e7a71f26,Command:[/bin/bash],Args:[-c # properly handle TERM and exit as soon as it is signaled set -euo pipefail trap 'jobs -p | xargs -r kill; exit 0' TERM sleep infinity & wait ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{5242880 0} {} 5Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hvj2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:readyz,Port:{0 9980 
},Host:10.0.0.3,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*3,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:guard-381dc1f35ae9dc10eb7713cdf9816315dc48d598-end,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-cluster-critical,Priority:*2000000000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:13 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:13:52 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:2,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:03:37 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:2,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:03:37 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:2,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 04:13:52 +0000 UTC,Reason:,Message:,ObservedGeneration:2,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.53,StartTime:2025-11-05 04:13:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:guard,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 04:13:53 +0000 UTC,FinishedAt:2025-11-05 05:04:13 +0000 
UTC,ContainerID:cri-o://fe08b6157aef3729867635501301f0c6fdc0e5772d1024b1f41631d080d02563,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:dc570b7d57ccf227c0776bb60dae4405b32b71bb143dc8279fbbf1c7e7a71f26,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:1a1a01258735184a4a2710257bf2a3a722a8d95c07908e5dfeba643426c4521b,ContainerID:cri-o://fe08b6157aef3729867635501301f0c6fdc0e5772d1024b1f41631d080d02563,Started:*false,AllocatedResources:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{5242880 0} {} 5Mi BinarySI},},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{5242880 0} {} 5Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kube-api-access-hvj2n,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Burstable,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.53,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:2,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > time="2025-11-05T05:07:15Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-zjp54]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:07:15Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-6d96f44c85-pgsrg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" I1105 05:07:22.420141 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:08:00Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:67 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:08:00Z reason:ProbeError]}" I1105 05:08:23.617571 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 05:08:29.524389 1669 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event" err="Internal error occurred: etcdserver: no leader" I1105 05:09:23.974650 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' E1105 05:09:25.822137 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1)" E1105 05:09:27.147597 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" I1105 05:09:30.444233 1669 trace.go:236] Trace[159034730]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290 (05-Nov-2025 05:08:30.440) (total time: 60003ms): Trace[159034730]: ---"Objects listed" error:the server was unable to return a response in the time allotted, but may still be processing the request (get events) 60003ms (05:09:30.444) Trace[159034730]: [1m0.003400648s] [1m0.003400648s] END E1105 05:09:30.444297 1669 reflector.go:205] "Failed to watch" err="failed to list *v1.Event: the server was unable to return a response in the time allotted, but may still be processing the request (get events)" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event" I1105 05:10:24.256546 1669 client.go:1078] Error running oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all: StdOut> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version) StdErr> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version) I1105 05:10:24.256759 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' E1105 05:10:25.826077 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1)" E1105 05:10:27.151720 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" I1105 05:10:32.354921 1669 trace.go:236] 
Trace[513100755]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290 (05-Nov-2025 05:09:32.349) (total time: 60005ms): Trace[513100755]: ---"Objects listed" error:the server was unable to return a response in the time allotted, but may still be processing the request (get events) 60005ms (05:10:32.354) Trace[513100755]: [1m0.005368334s] [1m0.005368334s] END E1105 05:10:32.355013 1669 reflector.go:205] "Failed to watch" err="failed to list *v1.Event: the server was unable to return a response in the time allotted, but may still be processing the request (get events)" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event" I1105 05:11:06.676647 1669 trace.go:236] Trace[926569410]: "Reflector ListAndWatch" name:k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290 (05-Nov-2025 05:10:38.649) (total time: 28027ms): Trace[926569410]: ---"Objects listed" error: 28024ms (05:11:06.673) Trace[926569410]: ---"Resource version extracted" 0ms (05:11:06.673) Trace[926569410]: ---"Objects extracted" 0ms (05:11:06.674) Trace[926569410]: ---"SyncWith done" 2ms (05:11:06.676) Trace[926569410]: ---"Resource version updated" 0ms (05:11:06.676) Trace[926569410]: [28.027092576s] [28.027092576s] END time="2025-11-05T05:11:06Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2363bb7230 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{ProbeError Liveness probe error: Get \"https://10.0.0.3:10259/healthz\": dial tcp 10.0.0.3:10259: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:11:04Z reason:ProbeError]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fe27218063 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{Unhealthy Liveness probe failed: Get \"https://10.0.0.3:10259/healthz\": dial tcp 10.0.0.3:10259: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:11:04Z reason:Unhealthy]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:9b6201a7d7 namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:marketplace-operator-65754d8564-dptvk]}" message="{ProbeError Liveness probe error: Get \"http://10.131.2.32:8080/healthz\": dial tcp 10.131.2.32:8080: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:10:54Z lastTimestamp:2025-11-05T05:10:54Z reason:ProbeError]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c64215c8f0 namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:marketplace-operator-65754d8564-dptvk]}" message="{Unhealthy Liveness probe failed: Get \"http://10.131.2.32:8080/healthz\": dial tcp 10.131.2.32:8080: connect: connection refused map[firstTimestamp:2025-11-05T05:10:54Z lastTimestamp:2025-11-05T05:10:54Z reason:Unhealthy]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:4ee35e80fc 
namespace:openshift-cluster-storage-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:csi-snapshot-controller-8c7f869b5-hfm7w]}" message="{BackOff Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-8c7f869b5-hfm7w_openshift-cluster-storage-operator(c5baa1a7-0c7e-4f42-82c9-ac9286af4074) map[count:2 firstTimestamp:2025-11-05T05:06:48Z lastTimestamp:2025-11-05T05:10:33Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:4ee35e80fc namespace:openshift-cluster-storage-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:csi-snapshot-controller-8c7f869b5-hfm7w]}" message="{BackOff Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-8c7f869b5-hfm7w_openshift-cluster-storage-operator(c5baa1a7-0c7e-4f42-82c9-ac9286af4074) map[count:3 firstTimestamp:2025-11-05T05:06:48Z lastTimestamp:2025-11-05T05:10:44Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:5ff6868c05 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{BackOff Back-off restarting failed container console-operator in pod console-operator-589679b99d-hksh7_openshift-console-operator(16a895c0-986d-469a-bd8d-90da83286b3a) map[count:6 firstTimestamp:2025-11-05T05:06:54Z lastTimestamp:2025-11-05T05:10:59Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:3ea80ef9e3 namespace:openshift-operator-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:operator-controller-controller-manager-77d5cd444c-twc2v]}" message="{BackOff Back-off restarting failed container manager in pod operator-controller-controller-manager-77d5cd444c-twc2v_openshift-operator-controller(288b2cb2-a2ab-48c6-8cd0-a4c0d2007d86) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:dafbd81ddc namespace:openshift-cluster-version node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:cluster-version-operator-f68b974b6-d9gc4]}" message="{BackOff Back-off restarting failed container cluster-version-operator in pod cluster-version-operator-f68b974b6-d9gc4_openshift-cluster-version(bd124fdb-c208-4b1a-829e-c54e64b838df) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:0c29c69252 namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{BackOff Back-off restarting failed container package-server-manager in pod package-server-manager-6cfb5fcd44-s6665_openshift-operator-lifecycle-manager(9085ffc3-d685-4571-ad23-402898386b56) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:f05fbd2e8e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" 
message="{BackOff Back-off restarting failed container authentication-operator in pod authentication-operator-7898ff465d-29vtv_openshift-authentication-operator(ac4870f8-58ad-445e-aae4-ec3e3c9db3b8) map[firstTimestamp:2025-11-05T05:11:00Z lastTimestamp:2025-11-05T05:11:00Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:f9b7b13437 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cluster-baremetal-operator-5f697474c6-h5nph]}" message="{BackOff Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5f697474c6-h5nph_openshift-machine-api(cbf3021a-2688-46b0-bfd3-eee18c3154eb) map[count:2 firstTimestamp:2025-11-05T05:06:32Z lastTimestamp:2025-11-05T05:11:00Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:bab48a2339 namespace:openshift-cloud-network-config-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cloud-network-config-controller-594bb6bf45-57tjr]}" message="{BackOff Back-off restarting failed container controller in pod cloud-network-config-controller-594bb6bf45-57tjr_openshift-cloud-network-config-controller(23b74885-a455-4733-85fa-47020c37abd2) map[count:2 firstTimestamp:2025-11-05T05:06:31Z lastTimestamp:2025-11-05T05:11:00Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:aecbd9bd60 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{BackOff Back-off restarting failed container manager in pod catalogd-controller-manager-66bcb68989-t6zbh_openshift-catalogd(31f5e71d-6925-4d89-9b5a-3d17a34fd35f) map[firstTimestamp:2025-11-05T05:11:04Z lastTimestamp:2025-11-05T05:11:04Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:65c2df760d namespace:openshift-route-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:route-controller-manager-595bb8d55f-b74br]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.12:8443/healthz\": context deadline exceeded map[firstTimestamp:2025-11-05T05:11:01Z lastTimestamp:2025-11-05T05:11:01Z reason:Unhealthy]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:ccf1df5bb7 namespace:openshift-cluster-storage-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:csi-snapshot-controller-operator-7bbb4b6f45-5fdd9]}" message="{BackOff Back-off restarting failed container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-7bbb4b6f45-5fdd9_openshift-cluster-storage-operator(9a5985b5-97c6-4e70-96d4-d9e15e9bc038) map[firstTimestamp:2025-11-05T05:11:01Z lastTimestamp:2025-11-05T05:11:01Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:a3435f1d55 namespace:openshift-controller-manager-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-controller-manager-operator-647f55d6d9-fpkf2]}" message="{BackOff Back-off restarting failed container openshift-controller-manager-operator in pod 
openshift-controller-manager-operator-647f55d6d9-fpkf2_openshift-controller-manager-operator(ebca4da2-68c6-4feb-81d4-85d24787d55b) map[firstTimestamp:2025-11-05T05:11:01Z lastTimestamp:2025-11-05T05:11:01Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:f05fbd2e8e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{BackOff Back-off restarting failed container authentication-operator in pod authentication-operator-7898ff465d-29vtv_openshift-authentication-operator(ac4870f8-58ad-445e-aae4-ec3e3c9db3b8) map[count:2 firstTimestamp:2025-11-05T05:11:00Z lastTimestamp:2025-11-05T05:11:02Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:f16602d6a7 namespace:openshift-route-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:route-controller-manager-595bb8d55f-b74br]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T05:11:02Z lastTimestamp:2025-11-05T05:11:02Z reason:ProbeError]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6617c5a00 namespace:openshift-route-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:route-controller-manager-595bb8d55f-b74br]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T05:11:02Z lastTimestamp:2025-11-05T05:11:02Z reason:Unhealthy]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:a26683112f namespace:openshift-etcd-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-operator-858c88f488-zh8kh]}" message="{BackOff Back-off restarting failed container etcd-operator in pod etcd-operator-858c88f488-zh8kh_openshift-etcd-operator(6193584c-aca3-4b82-abb5-c97245572f7e) map[firstTimestamp:2025-11-05T05:11:02Z lastTimestamp:2025-11-05T05:11:02Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:a26683112f namespace:openshift-etcd-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-operator-858c88f488-zh8kh]}" message="{BackOff Back-off restarting failed container etcd-operator in pod etcd-operator-858c88f488-zh8kh_openshift-etcd-operator(6193584c-aca3-4b82-abb5-c97245572f7e) map[count:2 firstTimestamp:2025-11-05T05:11:02Z lastTimestamp:2025-11-05T05:11:03Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:695b92450f namespace:openshift-cluster-storage-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cluster-storage-operator-5cbc46876-ptc9h]}" message="{BackOff Back-off restarting failed container cluster-storage-operator in pod cluster-storage-operator-5cbc46876-ptc9h_openshift-cluster-storage-operator(dd9b12e4-4655-434d-bfa5-304aa533c1d2) map[firstTimestamp:2025-11-05T05:11:04Z lastTimestamp:2025-11-05T05:11:04Z reason:BackOff]}" 
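
The two `invalid queue key ... object has no meta: object does not implement the Object interfaces` errors from pod_ip_controller.go earlier in this log are client-go's standard complaint when a key function is handed something that is not a `metav1.Object`: the printed key is a composite of the `namespace/name` string and a full `Pod` struct by value, and neither a wrapper struct nor a non-pointer `v1.Pod` satisfies the interface. A minimal sketch that reproduces the exact error string via `cache.MetaNamespaceKeyFunc` (the `queueKey` type below is a hypothetical stand-in for illustration, not the monitor's real key type):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// queueKey is a hypothetical composite key mirroring the shape printed in the
// log: a "namespace/name" string bundled with the Pod by value. It does not
// implement metav1.Object, so meta.Accessor rejects it.
type queueKey struct {
	key string
	pod v1.Pod
}

func main() {
	pod := v1.Pod{}
	pod.Namespace = "openshift-etcd"
	pod.Name = "etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1"

	// MetaNamespaceKeyFunc calls meta.Accessor internally; a non-Object input
	// yields "object has no meta: object does not implement the Object interfaces".
	if _, err := cache.MetaNamespaceKeyFunc(queueKey{key: pod.Namespace + "/" + pod.Name, pod: pod}); err != nil {
		fmt.Println(err)
	}

	// A *v1.Pod does implement metav1.Object, so the same call succeeds.
	if key, err := cache.MetaNamespaceKeyFunc(&pod); err == nil {
		fmt.Println(key) // openshift-etcd/etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-1
	}
}
```

The usual fix on the producer side is to enqueue either the plain string key or the typed pointer, never a wrapper struct, so that downstream `meta.Accessor` calls succeed.
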
time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:bb669f7fad namespace:openshift-cluster-olm-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cluster-olm-operator-6994c657c-55f9z]}" message="{BackOff Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-6994c657c-55f9z_openshift-cluster-olm-operator(29b1c5cc-51bd-4933-beb6-04517aec5f2a) map[firstTimestamp:2025-11-05T05:11:04Z lastTimestamp:2025-11-05T05:11:04Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:4bd5a56620 namespace:openshift-kube-storage-version-migrator-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-storage-version-migrator-operator-74b4965bb8-lhh76]}" message="{BackOff Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-74b4965bb8-lhh76_openshift-kube-storage-version-migrator-operator(57af62a5-8dc9-422c-acb6-be4d718a63b9) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:066e5d7100 namespace:openshift-kube-apiserver-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-operator-fdffbc57b-6z87w]}" message="{BackOff Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-fdffbc57b-6z87w_openshift-kube-apiserver-operator(514529b5-2523-424e-8b53-46ac19a8bfed) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:eb36eb5dfa namespace:openshift-apiserver-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-apiserver-operator-65647564c4-szq84]}" message="{BackOff Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-65647564c4-szq84_openshift-apiserver-operator(69f82724-4642-4e41-a59c-1583f28f9de2) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:786c4a4b48 namespace:openshift-kube-scheduler-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-operator-5764b49dcd-2rh4j]}" message="{BackOff Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-5764b49dcd-2rh4j_openshift-kube-scheduler-operator(3856c35f-a46c-4d9e-8eb6-0c539072018e) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:179d425ecd namespace:openshift-service-ca-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:service-ca-operator-7cbc764c44-d8wts]}" message="{BackOff Back-off restarting failed container service-ca-operator in pod service-ca-operator-7cbc764c44-d8wts_openshift-service-ca-operator(f327475d-251b-405f-86b7-fc5957e568bb) map[firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:05Z reason:BackOff]}" time="2025-11-05T05:11:06Z" 
level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:cf5ab05d51 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container kube-scheduler in pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-kube-scheduler(a8fc36efaf1ce35b98cabcb08af498ee) map[firstTimestamp:2025-11-05T05:11:06Z lastTimestamp:2025-11-05T05:11:06Z reason:BackOff]}" time="2025-11-05T05:11:07Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:0c29c69252 namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{BackOff Back-off restarting failed container package-server-manager in pod package-server-manager-6cfb5fcd44-s6665_openshift-operator-lifecycle-manager(9085ffc3-d685-4571-ad23-402898386b56) map[count:2 firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:07Z reason:BackOff]}" time="2025-11-05T05:11:07Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:3ea80ef9e3 namespace:openshift-operator-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:operator-controller-controller-manager-77d5cd444c-twc2v]}" message="{BackOff Back-off restarting failed container manager in pod operator-controller-controller-manager-77d5cd444c-twc2v_openshift-operator-controller(288b2cb2-a2ab-48c6-8cd0-a4c0d2007d86) map[count:2 firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:07Z reason:BackOff]}" time="2025-11-05T05:11:07Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:aecbd9bd60 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{BackOff Back-off restarting failed container manager in pod catalogd-controller-manager-66bcb68989-t6zbh_openshift-catalogd(31f5e71d-6925-4d89-9b5a-3d17a34fd35f) map[count:2 firstTimestamp:2025-11-05T05:11:04Z lastTimestamp:2025-11-05T05:11:07Z reason:BackOff]}" time="2025-11-05T05:11:08Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:cf5ab05d51 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container kube-scheduler in pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-kube-scheduler(a8fc36efaf1ce35b98cabcb08af498ee) map[count:2 firstTimestamp:2025-11-05T05:11:06Z lastTimestamp:2025-11-05T05:11:08Z reason:BackOff]}" time="2025-11-05T05:11:11Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:aecbd9bd60 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{BackOff Back-off restarting failed container manager in pod catalogd-controller-manager-66bcb68989-t6zbh_openshift-catalogd(31f5e71d-6925-4d89-9b5a-3d17a34fd35f) map[count:3 firstTimestamp:2025-11-05T05:11:04Z lastTimestamp:2025-11-05T05:11:11Z reason:BackOff]}" time="2025-11-05T05:11:11Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:0c29c69252 
namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{BackOff Back-off restarting failed container package-server-manager in pod package-server-manager-6cfb5fcd44-s6665_openshift-operator-lifecycle-manager(9085ffc3-d685-4571-ad23-402898386b56) map[count:3 firstTimestamp:2025-11-05T05:11:05Z lastTimestamp:2025-11-05T05:11:11Z reason:BackOff]}" time="2025-11-05T05:11:14Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:bab48a2339 namespace:openshift-cloud-network-config-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cloud-network-config-controller-594bb6bf45-57tjr]}" message="{BackOff Back-off restarting failed container controller in pod cloud-network-config-controller-594bb6bf45-57tjr_openshift-cloud-network-config-controller(23b74885-a455-4733-85fa-47020c37abd2) map[count:3 firstTimestamp:2025-11-05T05:06:31Z lastTimestamp:2025-11-05T05:11:14Z reason:BackOff]}" time="2025-11-05T05:11:14Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:f9b7b13437 namespace:openshift-machine-api node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:cluster-baremetal-operator-5f697474c6-h5nph]}" message="{BackOff Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5f697474c6-h5nph_openshift-machine-api(cbf3021a-2688-46b0-bfd3-eee18c3154eb) map[count:3 firstTimestamp:2025-11-05T05:06:32Z lastTimestamp:2025-11-05T05:11:14Z reason:BackOff]}" time="2025-11-05T05:11:14Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:cf5ab05d51 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container kube-scheduler in pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-kube-scheduler(a8fc36efaf1ce35b98cabcb08af498ee) map[count:3 firstTimestamp:2025-11-05T05:11:06Z lastTimestamp:2025-11-05T05:11:14Z reason:BackOff]}" time="2025-11-05T05:11:30Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:11:30Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:11:30Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-6d96f44c85-pgsrg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:11:30Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-zjp54]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:11:30Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-cts8l]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:11:35Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:d3e991580a namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1_openshift-etcd(1374c23603b9826b929123fe721a00ce) map[count:11 firstTimestamp:2025-11-05T05:05:35Z lastTimestamp:2025-11-05T05:11:35Z reason:BackOff]}" E1105 05:11:39.342091 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-etcd/revision-pruner-16-ci-op-x0f88pwp-f3da4-d9fgd-master-1 &Pod{ObjectMeta:{revision-pruner-16-ci-op-x0f88pwp-f3da4-d9fgd-master-1 openshift-etcd b52f7557-55e5-4d67-98d5-43e1fae1d55e 51334 1 2025-11-05 05:07:18 +0000 UTC map[app:pruner] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.101/23"],"mac_address":"0a:58:0a:80:00:65","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.101/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.101" ], "mac": "0a:58:0a:80:00:65", "default": true, "dns": {} }]] [{v1 ConfigMap revision-status-16 eae2ee63-75f0-4010-9d9a-64613d911768 }] [] [{cluster-etcd-operator Update v1 2025-11-05 05:07:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eae2ee63-75f0-4010-9d9a-64613d911768\"}":{}}},"f:spec":{"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"pruner\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeName":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsUser":{}},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"kube-api-access\"}":{".":{},"f:name":{},"f:projected":{".":{},"f:defaultMode":{},"f:sources":{}}},"k:{\"name\":\"kubelet-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} } {ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 05:07:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.ovn.org/pod-networks":{}}}} status} {multus-daemon Update v1 2025-11-05 05:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:07:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.101\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kubelet-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3600,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:pruner,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:dc570b7d57ccf227c0776bb60dae4405b32b71bb143dc8279fbbf1c7e7a71f26,Command:[cluster-etcd-operator prune],Args:[-v=4 --max-eligible-revision=16 --protected-revisions=3,4,5,6,7,8,9,10,11,12,13,14,15,16 --resource-dir=/etc/kubernetes/static-pod-resources --cert-dir=etcd-certs --static-pod-name=etcd-pod],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 
200M DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/etc/kubernetes/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:installer-sa,DeprecatedServiceAccount:installer-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:installer-sa-dockercfg-hmj5s,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:*false,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:30 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.101,StartTime:2025-11-05 05:07:22 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pruner,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 05:07:28 +0000 UTC,FinishedAt:2025-11-05 05:07:29 +0000 UTC,ContainerID:cri-o://0419ceaff7ae1502ce3ad9ea4de4b6d4e920a448ff0288ac9d72ada73cb5eccc,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:dc570b7d57ccf227c0776bb60dae4405b32b71bb143dc8279fbbf1c7e7a71f26,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:1a1a01258735184a4a2710257bf2a3a722a8d95c07908e5dfeba643426c4521b,ContainerID:cri-o://0419ceaff7ae1502ce3ad9ea4de4b6d4e920a448ff0288ac9d72ada73cb5eccc,Started:*false,AllocatedResources:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Resources:&ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kubelet-dir,MountPath:/etc/kubernetes/,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Guaranteed,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.101,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:11:39.342500 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-etcd/revision-pruner-15-ci-op-x0f88pwp-f3da4-d9fgd-master-1 &Pod{ObjectMeta:{revision-pruner-15-ci-op-x0f88pwp-f3da4-d9fgd-master-1 openshift-etcd a0196cf2-b0b3-4d6e-b77e-a8629eb2e626 50216 1 2025-11-05 05:04:12 +0000 UTC map[app:pruner] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.99/23"],"mac_address":"0a:58:0a:80:00:63","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.99/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.99" ], "mac": "0a:58:0a:80:00:63", "default": true, "dns": {} }]] [{v1 ConfigMap revision-status-15 4c11567e-6da9-4635-9259-efa79ed672a9 }] [] [{ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 05:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.ovn.org/pod-networks":{}}}} status} {cluster-etcd-operator Update v1 2025-11-05 05:04:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c11567e-6da9-4635-9259-efa79ed672a9\"}":{}}},"f:spec":{"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"pruner\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeName":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsUser":{}},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"kube-api-access\"}":{".":{},"f:name":{},"f:projected":{".":{},"f:defaultMode":{},"f:sources":{}}},"k:{\"name\":\"kubelet-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} } {multus-daemon Update v1 2025-11-05 05:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:06:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.99\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kubelet-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3600,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:pruner,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:dc570b7d57ccf227c0776bb60dae4405b32b71bb143dc8279fbbf1c7e7a71f26,Command:[cluster-etcd-operator prune],Args:[-v=4 --max-eligible-revision=15 --protected-revisions=3,4,5,6,7,8,9,10,11,12,13,14,15 --resource-dir=/etc/kubernetes/static-pod-resources --cert-dir=etcd-certs --static-pod-name=etcd-pod],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/etc/kubernetes/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:installer-sa,DeprecatedServiceAccount:installer-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:installer-sa-dockercfg-hmj5s,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:*false,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:15 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:12 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:12 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:12 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:12 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.99,StartTime:2025-11-05 05:04:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pruner,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 05:04:13 +0000 UTC,FinishedAt:2025-11-05 05:04:13 +0000 UTC,ContainerID:cri-o://0073046fb3f0feca6260eec8216b2333f67f69bd8adfac449a241f8a1c91281e,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:dc570b7d57ccf227c0776bb60dae4405b32b71bb143dc8279fbbf1c7e7a71f26,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:1a1a01258735184a4a2710257bf2a3a722a8d95c07908e5dfeba643426c4521b,ContainerID:cri-o://0073046fb3f0feca6260eec8216b2333f67f69bd8adfac449a241f8a1c91281e,Started:*false,AllocatedResources:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Resources:&ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kubelet-dir,MountPath:/etc/kubernetes/,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Guaranteed,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.99,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:11:39.343527 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-kube-controller-manager/revision-pruner-6-ci-op-x0f88pwp-f3da4-d9fgd-master-1 &Pod{ObjectMeta:{revision-pruner-6-ci-op-x0f88pwp-f3da4-d9fgd-master-1 openshift-kube-controller-manager 13574ba3-379f-437b-9675-1ca67baff500 51335 1 2025-11-05 05:07:01 +0000 UTC map[app:pruner] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.102/23"],"mac_address":"0a:58:0a:80:00:66","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.102/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.102" ], "mac": "0a:58:0a:80:00:66", "default": true, "dns": {} }]] [{v1 ConfigMap revision-status-6 c150e33c-4311-4856-819f-25ed8dfdada7 }] [] [{cluster-kube-controller-manager-operator Update v1 2025-11-05 05:07:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c150e33c-4311-4856-819f-25ed8dfdada7\"}":{}}},"f:spec":{"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"pruner\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeName":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsUser":{}},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"kube-api-access\"}":{".":{},"f:name":{},"f:projected":{".":{},"f:defaultMode":{},"f:sources":{}}},"k:{\"name\":\"kubelet-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} } {ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 05:07:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.ovn.org/pod-networks":{}}}} status} {multus-daemon Update v1 2025-11-05 05:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:07:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kubelet-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3600,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:pruner,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:a9721c6e61db711562fbb0412bc477d4c31ed6cadb4fe49ecf0b06ccc3635543,Command:[cluster-kube-controller-manager-operator prune],Args:[-v=4 --max-eligible-revision=6 --protected-revisions=2,3,4,5,6 --resource-dir=/etc/kubernetes/static-pod-resources --cert-dir=kube-controller-manager-certs --static-pod-name=kube-controller-manager-pod],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/etc/kubernetes/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:installer-sa,DeprecatedServiceAccount:installer-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:installer-sa-dockercfg-8stsc,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:*false,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:30 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:07:22 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.102,StartTime:2025-11-05 05:07:22 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pruner,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 05:07:28 +0000 UTC,FinishedAt:2025-11-05 05:07:29 +0000 UTC,ContainerID:cri-o://af48f437d566742a28edc50608fca8ef6cdfa71027f9888f5fa99944cbd99cde,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:a9721c6e61db711562fbb0412bc477d4c31ed6cadb4fe49ecf0b06ccc3635543,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:7eea4f957ee6f9cc54727b7c88a48818dc9da40b873cba217695d681d2cab86f,ContainerID:cri-o://af48f437d566742a28edc50608fca8ef6cdfa71027f9888f5fa99944cbd99cde,Started:*false,AllocatedResources:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Resources:&ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kubelet-dir,MountPath:/etc/kubernetes/,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Guaranteed,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.102,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:11:39.343812 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-kube-apiserver/revision-pruner-8-ci-op-x0f88pwp-f3da4-d9fgd-master-1 &Pod{ObjectMeta:{revision-pruner-8-ci-op-x0f88pwp-f3da4-d9fgd-master-1 openshift-kube-apiserver be3306a4-20e8-405c-a52b-04cf231e97df 50201 1 2025-11-05 05:04:12 +0000 UTC map[app:pruner] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.0.100/23"],"mac_address":"0a:58:0a:80:00:64","gateway_ips":["10.128.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.0.1"}],"ip_address":"10.128.0.100/23","gateway_ip":"10.128.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.0.100" ], "mac": "0a:58:0a:80:00:64", "default": true, "dns": {} }]] [{v1 ConfigMap revision-status-8 cdd8abd9-ce1a-45d2-ae9a-e5c80c9ab755 }] [] [{ci-op-x0f88pwp-f3da4-d9fgd-master-1 Update v1 2025-11-05 05:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.ovn.org/pod-networks":{}}}} status} {cluster-kube-apiserver-operator Update v1 2025-11-05 05:04:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cdd8abd9-ce1a-45d2-ae9a-e5c80c9ab755\"}":{}}},"f:spec":{"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"pruner\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeName":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsUser":{}},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"kube-api-access\"}":{".":{},"f:name":{},"f:projected":{".":{},"f:defaultMode":{},"f:sources":{}}},"k:{\"name\":\"kubelet-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} } {multus-daemon Update v1 2025-11-05 05:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:06:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.0.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kubelet-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3600,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:pruner,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,Command:[cluster-kube-apiserver-operator prune],Args:[-v=4 --max-eligible-revision=8 --protected-revisions=3,4,5,6,7,8 --resource-dir=/etc/kubernetes/static-pod-resources --cert-dir=kube-apiserver-certs --static-pod-name=kube-apiserver-pod],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/etc/kubernetes/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:installer-sa,DeprecatedServiceAccount:installer-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:installer-sa-dockercfg-8kzds,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:*false,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:16 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:12 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:15 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:15 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:04:12 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.0.3,PodIP:10.128.0.100,StartTime:2025-11-05 05:04:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pruner,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 05:04:13 +0000 UTC,FinishedAt:2025-11-05 05:04:14 +0000 UTC,ContainerID:cri-o://b5c7f684fe0b7596b2d20d322ed5657cacfd227bcd5dddbe9850586ca7dd0b03,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ContainerID:cri-o://b5c7f684fe0b7596b2d20d322ed5657cacfd227bcd5dddbe9850586ca7dd0b03,Started:*false,AllocatedResources:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Resources:&ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kubelet-dir,MountPath:/etc/kubernetes/,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Guaranteed,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.0.100,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.3,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > time="2025-11-05T05:12:07Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:39ed76cf4d namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{BackOff Back-off restarting failed container openshift-config-operator in pod openshift-config-operator-69bc6697c9-2bmrs_openshift-config-operator(b0d02a1b-126b-45d5-b681-d118b27812fc) map[count:21 firstTimestamp:2025-11-05T05:05:06Z lastTimestamp:2025-11-05T05:12:07Z reason:BackOff]}" I1105 05:12:34.609602 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:13:08Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:13:08Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-6d96f44c85-pgsrg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:13:08Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:13:08Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-zjp54]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:13:08Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-cts8l]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:13:08Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:09Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:10Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:11Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:12Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:13Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:14Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd 
mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:15Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:16Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:17Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:18Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:19Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:20Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:21Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:22Z" level=error msg="pod 
logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:23Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:24Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:25Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:26Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:27Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:28Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:29Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:30Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:31Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:32Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:33Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:34Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" I1105 05:13:34.878325 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:13:35Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:36Z" 
level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:37Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:38Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:39Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:40Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:41Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:42Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:43Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:44Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:45Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:46Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:47Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:48Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:49Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:50Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:51Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:52Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:53Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:54Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:55Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:56Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:57Z" level=error msg="pod logged an error: Get 
\"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:58Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:13:59Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true×tamps=true\": dial tcp 10.0.0.3:10250: connect: connection refused" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:00Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:14:00Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-6d96f44c85-pgsrg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:14:00Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-zjp54]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
[the PodsStreamer connection-refused record for etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 resumes once per second from 05:14:00 through 05:14:19, elided; only the following event is new in this window]
time="2025-11-05T05:14:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-ssscs]}" message="{NodeNotReady Node is not ready map[firstTimestamp:2025-11-05T05:14:01Z lastTimestamp:2025-11-05T05:14:01Z reason:NodeNotReady]}"
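The once-per-second connection-refused records are the monitor's pod-log streamer re-dialing the kubelet containerLogs endpoint on port 10250 while master-1 is down for its reboot. A minimal sketch of that retry shape, under the assumption that this is not the origin monitor's actual code and that a real client must present certificates the kubelet accepts:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	// The kubelet serves container logs over HTTPS on 10250; while the node
	// is rebooting, every dial fails with "connection refused", one per tick.
	url := "https://10.0.0.3:10250/containerLogs/openshift-etcd/" +
		"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true"
	client := &http.Client{
		Transport: &http.Transport{
			// Sketch only: real access needs client certs, not skipped verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second, // a real streamer would not time out a follow
	}
	for range time.Tick(1 * time.Second) {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("pod logged an error: %v", err) // matches the records above
			continue
		}
		resp.Body.Close()
		return // kubelet is back; a real streamer would now consume the stream
	}
}
```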
time="2025-11-05T05:14:20Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:14:20Z reason:ProbeError]}"
time="2025-11-05T05:14:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:14:20Z reason:Unhealthy]}"
[the same ProbeError/Unhealthy pair recurs at 05:14:25, 05:14:30 (twice), 05:14:35, 05:14:40, and 05:14:45 with count climbing from 2 to 7; the readyz body is identical in every occurrence and only count/lastTimestamp change, elided; interleaved at 05:14:36 with:]
I1105 05:14:36.392896 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
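Note what the 500 body actually says: every readyz check is [+] except "[-]shutdown failed", so the guard pod's probe is catching an apiserver draining for its static-pod roll, not one that is broken. A small sketch of reading the verbose readyz body and surfacing only the failing checks; the endpoint URL here is a placeholder, and a real call against a cluster needs authentication:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// /readyz?verbose returns one "[+]name ok" or "[-]name ..." line per check;
	// the probes above failed solely on "[-]shutdown failed: reason withheld".
	resp, err := http.Get("http://127.0.0.1:6443/readyz?verbose") // placeholder URL
	if err != nil {
		fmt.Println("readyz unreachable:", err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "[-]") {
			fmt.Println("failing check:", line)
		}
	}
	fmt.Println("status:", resp.StatusCode) // 500 while any check fails
}
```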
openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:47Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:47Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c4f15468db namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:47Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:47Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:0279dd4c87 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:47Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926" time="2025-11-05T05:14:47Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:0279dd4c87 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:2 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:47Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:48Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c2bbb93f43 namespace:openshift-kube-apiserver service:apiserver]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-apiserver/apiserver: skipping Pod kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:2 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:48Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:48Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:db436a742b namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:2 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:48Z reason:FailedToUpdateEndpointSlices]}" 
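The FailedToUpdateEndpointSlices entries above all share one cause: the Machine backing node ci-op-x0f88pwp-f3da4-d9fgd-master-1 was deleted during the control-plane replacement, so the Node object is gone while its static pods still linger, and the EndpointSlice controller skips those pods. A minimal client-go sketch of the same node-existence check, assuming an out-of-cluster kubeconfig (the path is illustrative, not from this job):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: out-of-cluster access; kubeconfig path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The stale etcd static pod named in the log entries above.
	pod, err := cs.CoreV1().Pods("openshift-etcd").Get(context.TODO(),
		"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The EndpointSlice controller skips a pod whose spec.nodeName refers to a
	// Node that no longer exists -- the "Node ... Not Found" condition logged here.
	_, err = cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Printf("node %s is gone; pod %s is skipped for endpoint updates\n",
			pod.Spec.NodeName, pod.Name)
	}
}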
time="2025-11-05T05:14:48Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c4f15468db namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:2 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:48Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:48Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:0279dd4c87 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:3 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:48Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926" time="2025-11-05T05:14:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:14:48Z reason:Unhealthy]}" time="2025-11-05T05:14:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926" STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:46.443 STEP: Checking the control plane machine set exists @ 11/05/25 04:44:46.533 STEP: Checking the control plane machine set is active @ 11/05/25 04:44:46.566 STEP: Checking the control plane machine set is up to date @ 11/05/25 04:44:46.608 STEP: Waiting for the updated replicas to equal desired replicas @ 11/05/25 04:44:46.636 STEP: Updated replicas is now equal to desired replicas @ 11/05/25 04:44:46.656 STEP: Waiting for the replicas to equal desired replicas @ 11/05/25 04:44:46.656 STEP: Replicas is now equal to desired replicas @ 11/05/25 04:44:46.675 STEP: Ensuring the control plane machine set is deleted @ 11/05/25 04:44:46.675 STEP: Deleting the control plane machine set @ 11/05/25 04:44:46.693 STEP: Waiting for the deleted control plane machine set to be removed/recreated @ 11/05/25 04:44:46.726 STEP: Control plane machine set is now removed/recreated @ 11/05/25 04:44:47.222 STEP: Checking the control plane machine set is up to date @ 11/05/25 
04:44:47.222 STEP: Waiting for the updated replicas to equal desired replicas @ 11/05/25 04:44:47.231 STEP: Updated replicas is now equal to desired replicas @ 11/05/25 05:01:57.209 STEP: Waiting for the replicas to equal desired replicas @ 11/05/25 05:01:57.209 STEP: Replicas is now equal to desired replicas @ 11/05/25 05:14:49.48 STEP: Checking the control plane machine set exists @ 11/05/25 05:14:49.494 STEP: Checking the control plane machine set is active @ 11/05/25 05:14:49.506 STEP: Checking the control plane machine set replicas are consistently up to date @ 11/05/25 05:14:49.523 [FAILED] in [It] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:300 @ 11/05/25 05:14:49.523 STEP: Checking the control plane machine set exists @ 11/05/25 05:14:49.535 STEP: Checking the control plane machine set is active @ 11/05/25 05:14:49.551 fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/cases.go:300]: test framework should not be nil Expected : nil not to be nil failed: (30m3s) 2025-11-05T05:14:49 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted and the ControlPlaneMachineSet is reactivated should have the control plane machine set not cause a rollout" time="2025-11-05T05:14:50Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c2bbb93f43 namespace:openshift-kube-apiserver service:apiserver]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-apiserver/apiserver: skipping Pod kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:3 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:50Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:50Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:db436a742b namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:3 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:50Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:50Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c4f15468db namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:3 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:50Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:50Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP 
probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:8 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:14:50Z reason:ProbeError]}" time="2025-11-05T05:14:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926" time="2025-11-05T05:14:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:14:50Z reason:Unhealthy]}" time="2025-11-05T05:14:50Z" level=error msg="pod logged an error: Get \"https://10.0.0.3:10250/containerLogs/openshift-etcd/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1/etcd?follow=true&timestamps=true\": dial tcp 10.0.0.3:10250: i/o timeout" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:50Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:14:51Z reason:Unhealthy]}" time="2025-11-05T05:14:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926" time="2025-11-05T05:14:51Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:52Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:0279dd4c87 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:4 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:52Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926" time="2025-11-05T05:14:52Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:14:53Z reason:Unhealthy]}" time="2025-11-05T05:14:53Z" 
level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:54Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c2bbb93f43 namespace:openshift-kube-apiserver service:apiserver]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-apiserver/apiserver: skipping Pod kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:4 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:54Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:54Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:db436a742b namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:4 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:54Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:54Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c4f15468db namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:4 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:14:54Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:14:54Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:4ceac36c5c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(640bebe7ba091455e1f24ac0637e5926) map[firstTimestamp:2025-11-05T05:14:54Z lastTimestamp:2025-11-05T05:14:54Z reason:BackOff]}" time="2025-11-05T05:14:54Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:55Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:4ceac36c5c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(640bebe7ba091455e1f24ac0637e5926) map[count:2 
firstTimestamp:2025-11-05T05:14:54Z lastTimestamp:2025-11-05T05:14:55Z reason:BackOff]}" time="2025-11-05T05:14:55Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:9 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:14:55Z reason:ProbeError]}" time="2025-11-05T05:14:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:14:55Z reason:Unhealthy]}" time="2025-11-05T05:14:55Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd 
mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:14:56Z reason:Unhealthy]}" time="2025-11-05T05:14:56Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:4ceac36c5c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(640bebe7ba091455e1f24ac0637e5926) map[count:3 firstTimestamp:2025-11-05T05:14:54Z lastTimestamp:2025-11-05T05:14:56Z reason:BackOff]}" time="2025-11-05T05:14:56Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:57Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:4ceac36c5c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(640bebe7ba091455e1f24ac0637e5926) map[count:4 firstTimestamp:2025-11-05T05:14:54Z lastTimestamp:2025-11-05T05:14:57Z reason:BackOff]}" time="2025-11-05T05:14:57Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:14:58Z reason:Unhealthy]}" time="2025-11-05T05:14:58Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:14:59Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:00Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" 
locator="{Kind map[hmsg:0279dd4c87 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:5 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:00Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:15:00Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:10 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:00Z reason:ProbeError]}" time="2025-11-05T05:15:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:00Z reason:Unhealthy]}" 
time="2025-11-05T05:15:00Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:15:01Z reason:Unhealthy]}" time="2025-11-05T05:15:01Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:02Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c2bbb93f43 namespace:openshift-kube-apiserver service:apiserver]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-apiserver/apiserver: skipping Pod kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:5 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:02Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:15:02Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:db436a742b namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:5 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:02Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:15:02Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c4f15468db namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:5 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:02Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T05:15:02Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 
namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:15:03Z reason:Unhealthy]}" time="2025-11-05T05:15:03Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:04Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:05Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check 
failed\n\n map[count:11 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:05Z reason:ProbeError]}" time="2025-11-05T05:15:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:11 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:05Z reason:Unhealthy]}" time="2025-11-05T05:15:05Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:15:06Z reason:Unhealthy]}" time="2025-11-05T05:15:06Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:07Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:15:08Z reason:Unhealthy]}" time="2025-11-05T05:15:08Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:09Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:10Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:12 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:10Z reason:ProbeError]}" time="2025-11-05T05:15:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:12 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:10Z reason:Unhealthy]}" time="2025-11-05T05:15:10Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe 
failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:15:11Z reason:Unhealthy]}" time="2025-11-05T05:15:11Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:12Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:b43609d2bf namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T05:15:12Z lastTimestamp:2025-11-05T05:15:12Z reason:ProbeError]}" time="2025-11-05T05:15:12Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:e9a40d76a6 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T05:15:12Z lastTimestamp:2025-11-05T05:15:12Z reason:Unhealthy]}" time="2025-11-05T05:15:12Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:15:13Z reason:Unhealthy]}" time="2025-11-05T05:15:13Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:14Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:15Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log 
ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:13 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:15:15Z reason:ProbeError]}" time="2025-11-05T05:15:15Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce" time="2025-11-05T05:15:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:15:16Z reason:Unhealthy]}" time="2025-11-05T05:15:16Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:0279dd4c87 namespace:openshift-etcd service:etcd]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-etcd/etcd: skipping Pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-etcd/etcd: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:6 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:16Z 
reason:FailedToUpdateEndpointSlices]}"
time="2025-11-05T05:15:16Z" level=error msg="pod logged an error: pods \"ci-op-x0f88pwp-f3da4-d9fgd-master-1\" not found" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-1 uid/3823639e-7df9-4d20-9abd-e2fa692d3034 container/etcd mirror-uid/1374c23603b9826b929123fe721a00ce"
[the PodsStreamer line above repeated at roughly one-second intervals through 2025-11-05T05:15:39Z; only the timestamp changes]
time="2025-11-05T05:15:17Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:b43609d2bf namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:15:12Z lastTimestamp:2025-11-05T05:15:17Z reason:ProbeError]}"
time="2025-11-05T05:15:17Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:e9a40d76a6 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[count:2 firstTimestamp:2025-11-05T05:15:12Z lastTimestamp:2025-11-05T05:15:17Z reason:Unhealthy]}"
[the ProbeError/Unhealthy pair above recurred at 2025-11-05T05:15:27Z (count:3) and 2025-11-05T05:15:37Z (count:4)]
time="2025-11-05T05:15:18Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:db436a742b namespace:openshift-kube-controller-manager service:kube-controller-manager]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-controller-manager/kube-controller-manager: skipping Pod kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-controller-manager/kube-controller-manager: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:6 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:18Z reason:FailedToUpdateEndpointSlices]}"
time="2025-11-05T05:15:18Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c4f15468db namespace:openshift-kube-scheduler service:scheduler]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-scheduler/scheduler: skipping Pod openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-scheduler/scheduler: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:6 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:18Z reason:FailedToUpdateEndpointSlices]}"
time="2025-11-05T05:15:18Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:c2bbb93f43 namespace:openshift-kube-apiserver service:apiserver]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-kube-apiserver/apiserver: skipping Pod kube-apiserver-ci-op-x0f88pwp-f3da4-d9fgd-master-1 for Service openshift-kube-apiserver/apiserver: Node ci-op-x0f88pwp-f3da4-d9fgd-master-1 Not Found map[count:6 firstTimestamp:2025-11-05T05:14:47Z lastTimestamp:2025-11-05T05:15:18Z reason:FailedToUpdateEndpointSlices]}"
[the openshift-kube-apiserver/apiserver event above recurred with count:7 through count:9 at 2025-11-05T05:15:34Z and count:10 through count:11 at 2025-11-05T05:15:35Z]
time="2025-11-05T05:15:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:14:48Z lastTimestamp:2025-11-05T05:15:18Z reason:Unhealthy]}"
[the event above recurred every ~5s, reaching count:10 at 2025-11-05T05:15:33Z]
time="2025-11-05T05:15:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:14:51Z lastTimestamp:2025-11-05T05:15:21Z reason:Unhealthy]}"
[the event above recurred every ~5s, reaching count:10 at 2025-11-05T05:15:36Z]
time="2025-11-05T05:15:22Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f9c18c043b namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded\nbody: \n map[firstTimestamp:2025-11-05T05:15:22Z lastTimestamp:2025-11-05T05:15:22Z reason:ProbeError]}"
time="2025-11-05T05:15:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:53b0411d30 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded map[firstTimestamp:2025-11-05T05:15:22Z lastTimestamp:2025-11-05T05:15:22Z reason:Unhealthy]}"
time="2025-11-05T05:15:32Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:03651b1d66 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T05:15:32Z lastTimestamp:2025-11-05T05:15:32Z reason:ProbeError]}"
time="2025-11-05T05:15:32Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:633685d6c2 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T05:15:32Z lastTimestamp:2025-11-05T05:15:32Z reason:Unhealthy]}"
I1105 05:15:36.658055 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T05:15:38Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:40f134ab79 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.87:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.87:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:15:38Z lastTimestamp:2025-11-05T05:15:38Z reason:ProbeError]}"
time="2025-11-05T05:15:38Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e87f55727f namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b4bf4cf7c-kqhd6]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.87:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.87:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:15:38Z lastTimestamp:2025-11-05T05:15:38Z reason:Unhealthy]}"
time="2025-11-05T05:15:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5fef41cb4d namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.44:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.44:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:15:41Z lastTimestamp:2025-11-05T05:15:41Z reason:ProbeError]}"
time="2025-11-05T05:15:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49df798749 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-65f46c49b8-z45xg]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.44:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.44:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:15:41Z lastTimestamp:2025-11-05T05:15:41Z reason:Unhealthy]}"
[the ProbeError/Unhealthy pair above recurred at 2025-11-05T05:15:46Z (count:2) and 2025-11-05T05:15:51Z (count:3, ProbeError only)]
time="2025-11-05T05:15:45Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-vrkrh]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T05:15:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b4bf4cf7c-nr8qt]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:15:48Z lastTimestamp:2025-11-05T05:15:48Z reason:Unhealthy]}"
[the event above recurred every ~5s, reaching count:10 at 2025-11-05T05:16:33Z]
time="2025-11-05T05:15:49Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:15:49Z lastTimestamp:2025-11-05T05:15:49Z reason:ProbeError]}"
time="2025-11-05T05:15:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:15:49Z lastTimestamp:2025-11-05T05:15:49Z reason:Unhealthy]}"
time="2025-11-05T05:15:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4f68cee8-c414-435d-8d78-de0687ed1995 container/etcd mirror-uid/640bebe7ba091455e1f24ac0637e5926"
[the PodsStreamer line above repeated at roughly one-second intervals through 2025-11-05T05:16:30Z; only the timestamp changes]
time="2025-11-05T05:15:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:15:52Z lastTimestamp:2025-11-05T05:15:52Z reason:ProbeError]}"
time="2025-11-05T05:15:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:15:52Z lastTimestamp:2025-11-05T05:15:52Z reason:Unhealthy]}"
[the etcd-guard ProbeError/Unhealthy pair above recurred every ~5s, variously classified as EtcdReadinessProbeError, EtcdReadinessProbeFailuresPerRevisionChange and ConnectionErrorDuringSingleNodeAPIServerTargetDown, with ProbeError reaching count:7 at 2025-11-05T05:16:17Z and Unhealthy reaching count:6 at 2025-11-05T05:16:12Z]
time="2025-11-05T05:15:57Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-mj7sv]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T05:16:22Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-k6vtb]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T05:16:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d"
[the PodsStreamer line above repeated at roughly one-second intervals through 2025-11-05T05:16:36Z; only the timestamp changes]
I1105 05:16:36.916404 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T05:16:43Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-6rxkx]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T05:16:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:16:48Z reason:Unhealthy]}"
time="2025-11-05T05:16:49Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-lrhfr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 10m0s, polling interval: 10s) @ 11/05/25 04:44:47.134
STEP: Checking the control plane machine set exists @ 11/05/25 04:45:47.311
STEP: Activating the control plane machine set @ 11/05/25 04:45:47.322
STEP: Checking the control plane machine set is active @ 11/05/25 04:45:47.373
STEP: Checking the control plane machine set is up to date @ 11/05/25 04:45:47.382
STEP: Waiting for the updated replicas to equal desired replicas @ 11/05/25 04:45:47.392
STEP: Updated replicas is now equal to desired replicas @ 11/05/25 05:01:57.221
STEP: Waiting for the replicas to equal desired replicas @ 11/05/25 05:01:57.221
STEP: Replicas is now equal to desired replicas @ 11/05/25 05:14:49.499
STEP: Ensuring the control plane machine set is deleted @ 11/05/25 05:14:49.499
STEP: Deleting the control plane machine set @ 11/05/25 05:14:49.51
STEP: Waiting for the deleted control plane machine set to be removed/recreated @ 11/05/25 05:14:49.533
STEP: Control plane machine set is now removed/recreated @ 11/05/25 05:14:49.778
STEP: Checking the control plane machine set is up to date @ 11/05/25 05:14:49.778
STEP: Waiting for the updated replicas to equal desired replicas @ 11/05/25 05:14:49.786
STEP: Updated replicas is now equal to desired replicas @ 11/05/25 05:14:49.92
STEP: Waiting for the replicas to equal desired replicas @ 11/05/25 05:14:49.92
STEP: Replicas is now equal to desired replicas @ 11/05/25 05:14:49.929
STEP: Checking the control plane machine set exists @ 11/05/25 05:14:49.94
STEP: Activating the control plane machine set @ 11/05/25 05:14:49.951
STEP: Checking the control plane machine set is active @ 11/05/25 05:14:50.014
STEP: Checking that all of the control plane machines have owner references @ 11/05/25 05:14:50.024
STEP: Checking that none of the control plane machines have a deletion timestamp @ 11/05/25 05:14:50.15
STEP: Waiting for the cluster operators to stabilise (minimum availability time: 1m0s, timeout: 2m0s, polling interval: 10s) @ 11/05/25 05:14:50.286
[FAILED] in [It] - /go/src/github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44 @ 11/05/25 05:16:50.288
STEP: Checking the control plane machine set exists @ 11/05/25 05:16:50.301
STEP: Checking the control plane machine set is active @ 11/05/25 05:16:50.311
fail [github.com/openshift/cluster-control-plane-machine-set-operator/test/e2e/helpers/clusteroperators.go:44]: Timed out after 120.000s.
cluster operators should all be available, not progressing and not degraded
Value for field 'Items' failed to satisfy matcher.
Expected <[]v1.ClusterOperator | len:34, cap:65>: : { Message: "Cluster operators [authentication etcd kube-apiserver openshift-apiserver] are either not available, are progressing or are degraded.", ClusterOperators: [ { Name: "authentication", Conditions: [ { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T04:24:22Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T05:07:07Z, }, Reason: "APIServerDeployment_PodsUpdating", Message: "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:32:42Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "Upgradeable", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "AsExpected", Message: "All is well", }, { Type: "EvaluationConditionsDetected", Status: "Unknown", LastTransitionTime: { Time: 2025-11-05T04:03:03Z, }, Reason: "NoData", Message: "", }, ], }, { Name: "etcd", Conditions: [ { Type: "Degraded", Status: "False", LastTransitionTime: { Time: 2025-11-05T05:15:09Z, }, Reason: "AsExpected", Message: "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 is unhealthy", }, { Type: "Progressing", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:50:13Z, }, Reason: "NodeInstaller", Message: "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 10; 1 node is at revision 16; 0 nodes have achieved new revision 19", }, { Type: "Available", Status: "True", LastTransitionTime: { Time: 2025-11-05T04:13:04Z, }, Reason: "AsExpected", Message: "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 10; 1 node is at revision 16; 0 nodes have achieved new revision 19\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 is unhealthy", }, { Type: "Upgradeable", Status: "True... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain element matching <*matchers.HaveFieldMatcher | 0xc000701e00>: { Field: "Status.Conditions", Expected: <*matchers.AndMatcher | 0xc00089af60>{ Matchers: [ <*matchers.ContainElementMatcher | 0xc00089a810>{ Element: <*matchers.AndMatcher | 0xc00089a7e0>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc000701ce0>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00075d270>{ Expected: "Available", }, }, <*matchers.HaveFieldMatcher | 0xc000701d00>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00075d280>{ Expected: "True", }, }, <*matchers.HaveFieldMatcher | 0xc000701d20>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc0008970c0>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc00089a780>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: 2638621, }, }, ], firstFailedMatcher: nil, }, Result: nil, }, <*matchers.ContainElementMatcher | 0xc00089ab70>{ Element: <*matchers.AndMatcher | 0xc00089ab40>{ Matchers: [ <*matchers.HaveFieldMatcher | 0xc000701d40>{ Field: "Type", Expected: <*matchers.EqualMatcher | 0xc00075d2a0>{ Expected: "Progressing", }, }, <*matchers.HaveFieldMatcher | 0xc000701d60>{ Field: "Status", Expected: <*matchers.EqualMatcher | 0xc00075d2b0>{ Expected: "False", }, }, <*matchers.HaveFieldMatcher | 0xc000701d80>{ Field: "LastTransitionTime.Time", Expected: <*matchers.WithTransformMatcher | 0xc000897100>{ Transform: 0x1968140, Matcher: <*matchers.BeNumericallyMatcher | 0xc00089a840>{Comparator: ">", CompareTo: [...]}, transformArgType: <*reflect.rtype | 0x1e11b80>{ t: {Size_: ..., PtrBytes: ..., Hash: ..., TFlag: ..., Align_: ..., FieldAlign_: ..., Kind_: ..., Equal: ..., GCData: ..., Str: ..., PtrToThis: ...}, }, transformedValue: nil, }, }, ], firstFailedMatcher: <*matchers.HaveFieldMatcher | 0xc000701d40>{ Field: "Type", Expected: <*matchers.EqualM... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. 
Learn more here: https://onsi.github.io/gomega/#adjusting-output failed: (32m3s) 2025-11-05T05:16:50 "ControlPlaneMachineSet Operator With an active ControlPlaneMachineSet and the ControlPlaneMachineSet is up to date and the ControlPlaneMachineSet is deleted and the ControlPlaneMachineSet is reactivated should find all control plane machines to have owner references set" started: 22/23/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] Invalid PIS leads to degraded MCN in a standard Pool [apigroup:machineconfiguration.openshift.io] [Serial]" time="2025-11-05T05:16:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:16:52Z reason:Unhealthy]}" time="2025-11-05T05:16:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:16:53Z reason:Unhealthy]}" time="2025-11-05T05:16:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:16:57Z reason:Unhealthy]}" time="2025-11-05T05:16:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:16:58Z reason:Unhealthy]}" time="2025-11-05T05:17:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:02Z reason:Unhealthy]}" time="2025-11-05T05:17:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:03Z reason:Unhealthy]}" passed: (10.5s) 2025-11-05T05:17:05 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] Invalid PIS leads to degraded MCN in a standard Pool [apigroup:machineconfiguration.openshift.io] [Serial]" started: 22/24/55 
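[Editor's note] Despite the truncation, the dump above shows the whole shape of the assertion at clusteroperators.go:44: for every ClusterOperator, Status.Conditions must contain an Available condition with Status True and a Progressing condition with Status False (and, per the failure text, a Degraded=False one), each with a LastTransitionTime old enough to satisfy the minimum availability window from the STEP lines. A minimal sketch of an equivalent Gomega matcher, assuming the openshift/api types; stableConditionsMatcher and minAvailability are hypothetical names, not the helper's own:

```go
// Sketch of the per-operator stability matcher visible in the truncated dump:
// HaveField("Status.Conditions", And(ContainElement(...), ...)) with a
// WithTransform/BeNumerically(">") age check on each LastTransitionTime.
package helpers

import (
	"time"

	. "github.com/onsi/gomega"
	configv1 "github.com/openshift/api/config/v1"
)

func stableConditionsMatcher(minAvailability time.Duration) OmegaMatcher {
	// A condition only counts as settled if it last changed before the
	// minimum-availability window began.
	settledBefore := time.Now().Add(-minAvailability)
	settled := func(t configv1.ClusterStatusConditionType, s configv1.ConditionStatus) OmegaMatcher {
		return ContainElement(And(
			HaveField("Type", Equal(t)),
			HaveField("Status", Equal(s)),
			HaveField("LastTransitionTime.Time", WithTransform(
				func(lt time.Time) float64 { return settledBefore.Sub(lt).Seconds() },
				BeNumerically(">", 0.0),
			)),
		))
	}
	return HaveField("Status.Conditions", And(
		settled(configv1.OperatorAvailable, configv1.ConditionTrue),
		settled(configv1.OperatorProgressing, configv1.ConditionFalse),
		settled(configv1.OperatorDegraded, configv1.ConditionFalse),
	))
}
```

Read against a matcher of this shape, the failure is exactly what the printed conditions say: etcd's Progressing flipped to True at 04:50:13 for the NodeInstaller revision rollout and never settled back inside the 2m0s timeout, and authentication was still rolling the oauth-apiserver deployment (1/3 pods updated). The truncation notice itself is cosmetic; gomega's format.MaxLength variable (0 disables the cutoff) controls how much of the object gets printed.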
"[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should upload the latest bootimage to the appropriate vCentre [apigroup:machineconfiguration.openshift.io]" time="2025-11-05T05:17:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:07Z reason:Unhealthy]}" time="2025-11-05T05:17:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:08Z reason:Unhealthy]}" skip [github.com/openshift/origin/test/extended/machine_config/helpers.go:56]: This test only applies to VSphere platform skipped: (5s) 2025-11-05T05:17:10 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should upload the latest bootimage to the appropriate vCentre [apigroup:machineconfiguration.openshift.io]" started: 22/25/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:59426-ssh keys can be updated in new dir on RHCOS9 node" time="2025-11-05T05:17:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:12Z reason:Unhealthy]}" time="2025-11-05T05:17:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:13Z reason:Unhealthy]}" time="2025-11-05T05:17:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:17Z reason:Unhealthy]}" time="2025-11-05T05:17:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:18Z reason:Unhealthy]}" time="2025-11-05T05:17:19Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 
namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:17:19Z lastTimestamp:2025-11-05T05:17:19Z reason:ProbeError]}" time="2025-11-05T05:17:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:17:19Z lastTimestamp:2025-11-05T05:17:19Z reason:Unhealthy]}" time="2025-11-05T05:17:20Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:22Z reason:Unhealthy]}" time="2025-11-05T05:17:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:23Z reason:Unhealthy]}" time="2025-11-05T05:17:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not 
available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:27Z reason:Unhealthy]}" time="2025-11-05T05:17:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:28Z reason:Unhealthy]}" time="2025-11-05T05:17:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:32Z reason:Unhealthy]}" time="2025-11-05T05:17:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b4bf4cf7c-vnccd]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:16:48Z lastTimestamp:2025-11-05T05:17:33Z reason:Unhealthy]}" time="2025-11-05T05:17:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:36Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" I1105 05:17:37.199230 1669 client.go:1023] Running 'oc 
--kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:17:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:16:52Z lastTimestamp:2025-11-05T05:17:37Z reason:Unhealthy]}" time="2025-11-05T05:17:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:37Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-lrhfr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:37Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:37Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:37Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-lrhfr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:37Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:46 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:17:37Z reason:ProbeError]}" time="2025-11-05T05:17:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:79 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:17:37Z reason:Unhealthy]}" time="2025-11-05T05:17:37Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:377387ba7a machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-35c05bcaebfd750f96196069545bbb54 
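[Editor's note] The kube-apiserver-guard ProbeError above is unusually informative: the verbose /readyz body lists every registered health check, all "[+] ok" except "[-]shutdown failed: reason withheld", i.e. the apiserver on master-2 is in graceful shutdown for the rollout, so the guard pod's readiness probe returns 500 by design. A minimal sketch of pulling the same verbose report out-of-band with client-go, assuming an admin kubeconfig in the default location (equivalent to `oc get --raw '/readyz?verbose'`):

```go
// Fetch the kube-apiserver's verbose /readyz report, the same body the
// guard pod's probe sees. On a 500, DoRaw returns both the body and an
// error, so the failing [-] check can still be read from the output.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	body, err := client.Discovery().RESTClient().
		Get().AbsPath("/readyz").Param("verbose", "").DoRaw(context.TODO())
	if err != nil {
		fmt.Println("readyz not ready:", err)
	}
	fmt.Print(string(body))
}
```

One caveat: the kubeconfig route goes through the API load balancer, so it reports whichever apiserver instance answers, not necessarily the draining one on master-2 that the guard pod probes directly.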
map[firstTimestamp:2025-11-05T05:17:37Z lastTimestamp:2025-11-05T05:17:37Z reason:SetDesiredConfig]}" time="2025-11-05T05:17:38Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:40Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:41Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:42Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:13b05f7b0d namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.90:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:17:42Z lastTimestamp:2025-11-05T05:17:42Z reason:ProbeError]}" time="2025-11-05T05:17:42Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1f91884245 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.90:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:17:42Z lastTimestamp:2025-11-05T05:17:42Z reason:Unhealthy]}" time="2025-11-05T05:17:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:42Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's 
node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:42Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:42Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:47 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:17:42Z reason:ProbeError]}" time="2025-11-05T05:17:42Z" level=info 
msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-8g6lj]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:17:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:44Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:45Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:17:45Z reason:Unhealthy]}" time="2025-11-05T05:17:46Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:13b05f7b0d namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.90:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:17:42Z lastTimestamp:2025-11-05T05:17:47Z reason:ProbeError]}" time="2025-11-05T05:17:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1f91884245 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{Unhealthy Readiness probe failed: Get 
\"https://10.129.0.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.90:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:17:42Z lastTimestamp:2025-11-05T05:17:47Z reason:Unhealthy]}" time="2025-11-05T05:17:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:17:50Z reason:Unhealthy]}" time="2025-11-05T05:17:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:13b05f7b0d namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-65f46c49b8-xqnn6]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.90:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:17:42Z lastTimestamp:2025-11-05T05:17:52Z reason:ProbeError]}" time="2025-11-05T05:17:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/9f80dec1-a9c5-4662-b1a1-d65fdd1c4715 container/etcd mirror-uid/62f503697d86112448061a10be31b43d" time="2025-11-05T05:17:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:17:55Z reason:Unhealthy]}" time="2025-11-05T05:17:56Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:17:57Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:17:58Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:17:59Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:18:00Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:18:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:00Z reason:Unhealthy]}" time="2025-11-05T05:18:01Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:18:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:05Z reason:Unhealthy]}" time="2025-11-05T05:18:05Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:f79e87a7c5 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-35c05bcaebfd750f96196069545bbb54 map[firstTimestamp:2025-11-05T05:18:05Z lastTimestamp:2025-11-05T05:18:05Z reason:SetDesiredConfig]}" time="2025-11-05T05:18:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:10Z reason:Unhealthy]}" time="2025-11-05T05:18:10Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-8g6lj]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
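[Editor's note] The per-second "not available" / "PodInitializing" errors for etcd on master-n9mzx-1 (and later master-m2rxm-0) trace the node installer pushing the etcd static pod through revisions, which the earlier failure dump summarized as "1 node is at revision 0; 1 node is at revision 10; 1 node is at revision 16; 0 nodes have achieved new revision 19" with 2 of 3 members available. A minimal sketch for watching that settle via the etcd ClusterOperator's conditions, assuming the openshift config clientset and a default kubeconfig:

```go
// Print the etcd ClusterOperator conditions; during the rollout observed in
// this log, Progressing stays True with a NodeInstaller reason until all
// members reach the new static-pod revision.
package main

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := configclient.NewForConfigOrDie(cfg)
	co, err := client.ConfigV1().ClusterOperators().Get(context.TODO(), "etcd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range co.Status.Conditions {
		fmt.Printf("%-28s %-7s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```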
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:18:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:15Z reason:Unhealthy]}" time="2025-11-05T05:18:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:20Z reason:Unhealthy]}" time="2025-11-05T05:18:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:25Z reason:Unhealthy]}" time="2025-11-05T05:18:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-zjp54]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:17:45Z lastTimestamp:2025-11-05T05:18:30Z reason:Unhealthy]}" time="2025-11-05T05:18:36Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:8094f990e1 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-35c05bcaebfd750f96196069545bbb54 map[firstTimestamp:2025-11-05T05:18:36Z lastTimestamp:2025-11-05T05:18:36Z reason:SetDesiredConfig]}" I1105 05:18:37.455448 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:18:50Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-plsnz]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
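[Editor's note] In parallel with the control-plane churn, the SetDesiredConfig events show the machine-config controller retargeting the three workers, one at a time, to the rendered-worker-35c05bcaebfd750f96196069545bbb54 config. The controller records that handoff in the machine-config operator's standard per-node annotations; a minimal sketch for reading them, assuming default kubeconfig access:

```go
// List each node's current vs desired MachineConfig; a node whose two
// annotations differ is the one a SetDesiredConfig event just targeted,
// and the state annotation tracks the daemon working through the update.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s\n  current: %s\n  desired: %s\n  state:   %s\n", n.Name,
			n.Annotations["machineconfiguration.openshift.io/currentConfig"],
			n.Annotations["machineconfiguration.openshift.io/desiredConfig"],
			n.Annotations["machineconfiguration.openshift.io/state"])
	}
}
```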
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:18:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:18:53Z reason:Unhealthy]}" time="2025-11-05T05:18:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:18:58Z reason:Unhealthy]}" time="2025-11-05T05:19:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:03Z reason:Unhealthy]}" time="2025-11-05T05:19:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:08Z reason:Unhealthy]}" time="2025-11-05T05:19:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:13Z reason:Unhealthy]}" time="2025-11-05T05:19:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:18Z reason:Unhealthy]}" time="2025-11-05T05:19:19Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:19:19Z reason:ProbeError]}" time="2025-11-05T05:19:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe 
failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:9 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:19:19Z reason:Unhealthy]}" time="2025-11-05T05:19:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:23Z reason:Unhealthy]}" time="2025-11-05T05:19:24Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:19:24Z reason:ProbeError]}" time="2025-11-05T05:19:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:10 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:19:24Z reason:Unhealthy]}" time="2025-11-05T05:19:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:28Z reason:Unhealthy]}" time="2025-11-05T05:19:29Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:19:29Z reason:ProbeError]}" time="2025-11-05T05:19:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:18:53Z lastTimestamp:2025-11-05T05:19:33Z reason:Unhealthy]}" I1105 05:19:37.737991 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:19:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:18:53Z 
lastTimestamp:2025-11-05T05:19:38Z reason:Unhealthy]}" time="2025-11-05T05:19:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:19111182a7 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.13:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:19:43Z lastTimestamp:2025-11-05T05:19:43Z reason:ProbeError]}" time="2025-11-05T05:19:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1b3134c6e0 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.13:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:19:43Z lastTimestamp:2025-11-05T05:19:43Z reason:Unhealthy]}" time="2025-11-05T05:19:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:44Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:45Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:46Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:19111182a7 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.13:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:19:43Z lastTimestamp:2025-11-05T05:19:48Z reason:ProbeError]}" time="2025-11-05T05:19:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1b3134c6e0 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.13:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:19:43Z lastTimestamp:2025-11-05T05:19:48Z reason:Unhealthy]}" time="2025-11-05T05:19:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/2a143f31-f1a3-4781-9ee8-3034e8eaa6fc container/etcd mirror-uid/1999142741a516c8879c614b4ee8c47f" time="2025-11-05T05:19:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:19:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:19:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:19:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:19111182a7 namespace:openshift-apiserver 
node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-65f46c49b8-4frl5]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.13:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.13:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:19:43Z lastTimestamp:2025-11-05T05:19:53Z reason:ProbeError]}" time="2025-11-05T05:19:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:19:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:19:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:20:27Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-plsnz]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:20:27Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-plsnz]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:20:27Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T05:20:27Z lastTimestamp:2025-11-05T05:20:27Z reason:SetDesiredConfig]}" I1105 05:20:38.001797 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:20:40Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:16 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:20:40Z reason:ProbeError]}" time="2025-11-05T05:20:55Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted 
node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T05:20:55Z lastTimestamp:2025-11-05T05:20:55Z reason:SetDesiredConfig]}" time="2025-11-05T05:21:26Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:16a31e5783 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T05:21:26Z lastTimestamp:2025-11-05T05:21:26Z reason:SetDesiredConfig]}" I1105 05:21:38.284997 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:21:41Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:105 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:21:41Z reason:ProbeError]}" time="2025-11-05T05:21:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:105 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:21:41Z reason:Unhealthy]}" time="2025-11-05T05:22:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:06Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:07Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:08Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:09Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not 
available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:10Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:12Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/ab2ae6a0-fecb-41bc-a37b-0d7af2313109 container/etcd mirror-uid/3c630858f15f11de426651cdc74081c7" time="2025-11-05T05:22:13Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:22:14Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:22:15Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:22:16Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:22:17Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" 
time="2025-11-05T05:22:18Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" I1105 05:22:38.588695 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:23:01Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:83f021c4c2 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:18 firstTimestamp:2025-11-05T04:17:58Z lastTimestamp:2025-11-05T05:23:01Z reason:ProbeError]}" passed: (6m16s) 2025-11-05T05:23:27 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:59426-ssh keys can be updated in new dir on RHCOS9 node" started: 22/26/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:64986-Remove all ssh keys" I1105 05:23:38.852105 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:23:41Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:b3f54bb5c6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-11ca7fd62e8dbecc1e6e692944a62411 map[firstTimestamp:2025-11-05T05:23:41Z lastTimestamp:2025-11-05T05:23:41Z reason:SetDesiredConfig]}" time="2025-11-05T05:23:43Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector 
ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:43Z reason:ProbeError]}" time="2025-11-05T05:23:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:43Z reason:Unhealthy]}" time="2025-11-05T05:23:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert 
ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:2 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:48Z reason:ProbeError]}" time="2025-11-05T05:23:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:48Z reason:Unhealthy]}" time="2025-11-05T05:23:53Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion 
ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:3 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:53Z reason:ProbeError]}" time="2025-11-05T05:23:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:53Z reason:Unhealthy]}" time="2025-11-05T05:23:53Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:4 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:53Z reason:ProbeError]}" time="2025-11-05T05:23:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 
namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:53Z reason:Unhealthy]}" time="2025-11-05T05:23:58Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:5 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:58Z reason:ProbeError]}" time="2025-11-05T05:23:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:23:58Z reason:Unhealthy]}" time="2025-11-05T05:24:03Z" level=info msg="event interval matches KubeAPIReadinessProbeError" 
locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:6 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:03Z reason:ProbeError]}" time="2025-11-05T05:24:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:03Z reason:Unhealthy]}" time="2025-11-05T05:24:08Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync 
ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:7 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:08Z reason:ProbeError]}" time="2025-11-05T05:24:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:08Z reason:Unhealthy]}" time="2025-11-05T05:24:09Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:28cd160ac9 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-11ca7fd62e8dbecc1e6e692944a62411 map[firstTimestamp:2025-11-05T05:24:09Z lastTimestamp:2025-11-05T05:24:09Z reason:SetDesiredConfig]}" time="2025-11-05T05:24:13Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available 
ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:8 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:13Z reason:ProbeError]}" time="2025-11-05T05:24:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:13Z reason:Unhealthy]}" time="2025-11-05T05:24:18Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers 
ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:9 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:18Z reason:ProbeError]}" time="2025-11-05T05:24:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:18Z reason:Unhealthy]}" time="2025-11-05T05:24:23Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller 
ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:10 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:23Z reason:ProbeError]}" time="2025-11-05T05:24:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:23Z reason:Unhealthy]}" time="2025-11-05T05:24:28Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer 
ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:11 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:28Z reason:ProbeError]}" time="2025-11-05T05:24:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:11 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:28Z reason:Unhealthy]}" time="2025-11-05T05:24:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller 
ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:12 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:33Z reason:ProbeError]}" time="2025-11-05T05:24:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:12 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:33Z reason:Unhealthy]}" time="2025-11-05T05:24:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:24:33Z lastTimestamp:2025-11-05T05:24:33Z reason:ProbeError]}" time="2025-11-05T05:24:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:24:33Z lastTimestamp:2025-11-05T05:24:33Z reason:Unhealthy]}" time="2025-11-05T05:24:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers 
ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:13 firstTimestamp:2025-11-05T05:23:43Z lastTimestamp:2025-11-05T05:24:38Z reason:ProbeError]}" I1105 05:24:39.141070 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:24:40Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:cd4227b27f machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-11ca7fd62e8dbecc1e6e692944a62411 map[firstTimestamp:2025-11-05T05:24:40Z lastTimestamp:2025-11-05T05:24:40Z reason:SetDesiredConfig]}" time="2025-11-05T05:24:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 
container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:56Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:57Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:58Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:24:59Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:00Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:01Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:02Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:03Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:04Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 
uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:06Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:07Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:08Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:09Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:10Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:12Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:13Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:14Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/e059fa1d-922a-44d9-a5d3-0f7d4af805c8 container/etcd mirror-uid/a8de4a53c246fb831c494f7c1be104d3" time="2025-11-05T05:25:15Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:25:15Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:15Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-plsnz]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:15Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-controller-manager pod:controller-manager-6848447799-9dq2c]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-zjp54]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-authentication pod:oauth-openshift-85b9b447d5-cts8l]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:713b7fda65 namespace:openshift-apiserver pod:apiserver-6d96f44c85-pgsrg]}" message="{FailedScheduling 0/7 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-697848cdf6-lrhfr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-route-controller-manager pod:route-controller-manager-595bb8d55f-zqfrv]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:25:16Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:25:17Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:25:18Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:25:19Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" I1105 05:25:39.428387 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:25:55Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:2 firstTimestamp:2025-11-05T05:20:27Z lastTimestamp:2025-11-05T05:25:55Z reason:SetDesiredConfig]}" time="2025-11-05T05:26:23Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:2 firstTimestamp:2025-11-05T05:20:55Z lastTimestamp:2025-11-05T05:26:23Z reason:SetDesiredConfig]}" time="2025-11-05T05:26:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:18 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:26:39Z reason:ProbeError]}" I1105 05:26:39.678505 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:26:54Z" level=info msg="event interval 
matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:16a31e5783 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:2 firstTimestamp:2025-11-05T05:21:26Z lastTimestamp:2025-11-05T05:26:54Z reason:SetDesiredConfig]}" time="2025-11-05T05:27:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:06Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:07Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:08Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:09Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:25 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:27:09Z reason:ProbeError]}" time="2025-11-05T05:27:09Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:10Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" 
is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:12Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:13Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/35f62781-21d9-451d-92bb-6b4f255dacc0 container/etcd mirror-uid/7a8a87a5532668c0c5165be0a33d4260" time="2025-11-05T05:27:14Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:27:15Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:27:16Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:27:17Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:27:18Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:27:19Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" I1105 05:27:39.980907 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' passed: (4m31s) 2025-11-05T05:27:58 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:64986-Remove all ssh keys" started: 22/27/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:59424-ssh keys can be found in new dir on RHCOS9 node" I1105 05:28:40.235539 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:28:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:20 firstTimestamp:2025-11-05T05:15:52Z lastTimestamp:2025-11-05T05:28:57Z reason:ProbeError]}" time="2025-11-05T05:28:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:20 firstTimestamp:2025-11-05T05:15:52Z lastTimestamp:2025-11-05T05:28:57Z reason:Unhealthy]}" time="2025-11-05T05:29:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" 
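The connection-refused ProbeError/Unhealthy events above come from per-member etcd guard pods polling https://<node-ip>:9980/readyz while the etcd static pod on that master restarts, and the long [+]/[-] check lists quoted in the earlier kube-apiserver probe bodies are verbose readyz output. A minimal sketch of how one could inspect both by hand against this cluster, reusing the kubeconfig path and guard pod name already shown in this log (the commands are ordinary oc usage for illustration; the suite does not run them here):

  # Guard pods mirror per-member etcd readiness; a NotReady guard lines up with the ProbeError events.
  oc --kubeconfig=/tmp/kubeconfig-2093074633 -n openshift-etcd get pods -o wide

  # Probe target (port 9980 /readyz) and failure counts for the flapping guard.
  oc --kubeconfig=/tmp/kubeconfig-2093074633 -n openshift-etcd describe pod etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1

  # The same [+]/[-] readiness check list the probe bodies quote, fetched from kube-apiserver directly.
  oc --kubeconfig=/tmp/kubeconfig-2093074633 get --raw '/readyz?verbose'
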
time="2025-11-05T05:29:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" passed: (1m23s) 2025-11-05T05:29:31 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:59424-ssh keys can be found in new dir on RHCOS9 node" started: 22/28/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should update boot images on all MachineSets when configured [apigroup:machineconfiguration.openshift.io]" time="2025-11-05T05:29:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer 
locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/66c924cd-5707-4376-9532-774e306bf7b7 container/etcd mirror-uid/925d23fb11765a1053c83220cad0e2e9" time="2025-11-05T05:29:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" time="2025-11-05T05:29:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" time="2025-11-05T05:29:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" time="2025-11-05T05:29:36Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" time="2025-11-05T05:29:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" I1105 05:29:40.496213 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' skip [github.com/openshift/machine-config-operator/test/extended/boot_image.go:40]: This test only applies to Azure platform skipped: (9.2s) 2025-11-05T05:29:50 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should update boot images on all MachineSets when configured [apigroup:machineconfiguration.openshift.io]" started: 22/29/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] All Nodes in a Custom Pool should have the PinnedImages in PIS [apigroup:machineconfiguration.openshift.io] [Serial]" time="2025-11-05T05:30:02Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:099d4d3fd3 machineconfigpool:custom namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-custom-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T05:30:02Z 
lastTimestamp:2025-11-05T05:30:02Z reason:SetDesiredConfig]}" I1105 05:30:40.732232 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:30:49Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:3 firstTimestamp:2025-11-05T05:20:27Z lastTimestamp:2025-11-05T05:30:49Z reason:SetDesiredConfig]}" passed: (1m23s) 2025-11-05T05:31:14 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] All Nodes in a Custom Pool should have the PinnedImages in PIS [apigroup:machineconfiguration.openshift.io] [Serial]" started: 22/30/55 "[sig-mco][OCPFeatureGate:MachineConfigNodes] [Suite:openshift/machine-config-operator/disruptive][Disruptive]Should properly report MCN conditions on node degrade [apigroup:machineconfiguration.openshift.io] [Serial]" time="2025-11-05T05:31:23Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:21db5ad3a6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig (combined from similar events): Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-8a98d43b962eca126a19a573fd9a788f map[firstTimestamp:2025-11-05T05:31:23Z lastTimestamp:2025-11-05T05:31:23Z reason:SetDesiredConfig]}" time="2025-11-05T05:31:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tbbfw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:31:32Z lastTimestamp:2025-11-05T05:31:32Z reason:Unhealthy]}" time="2025-11-05T05:31:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d763b3dcc namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:monitoring-plugin-79f9bc6c-kd6p2]}" message="{ProbeError Readiness probe error: Get \"https://10.129.2.28:9443/health\": dial tcp 10.129.2.28:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:31:39Z lastTimestamp:2025-11-05T05:31:39Z reason:ProbeError]}" time="2025-11-05T05:31:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d9a3a9da1e namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:monitoring-plugin-79f9bc6c-kd6p2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.2.28:9443/health\": dial tcp 10.129.2.28:9443: connect: connection refused map[firstTimestamp:2025-11-05T05:31:39Z lastTimestamp:2025-11-05T05:31:39Z reason:Unhealthy]}" I1105 05:31:40.995807 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:31:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tbbfw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 
map[count:2 firstTimestamp:2025-11-05T05:31:32Z lastTimestamp:2025-11-05T05:31:42Z reason:Unhealthy]}" time="2025-11-05T05:31:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tbbfw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:31:32Z lastTimestamp:2025-11-05T05:31:52Z reason:Unhealthy]}" time="2025-11-05T05:32:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tbbfw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:31:32Z lastTimestamp:2025-11-05T05:32:02Z reason:Unhealthy]}" time="2025-11-05T05:32:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tbbfw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:31:32Z lastTimestamp:2025-11-05T05:32:12Z reason:Unhealthy]}" I1105 05:32:41.381185 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 05:33:41.653252 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' passed: (2m31s) 2025-11-05T05:33:47 "[sig-mco][OCPFeatureGate:MachineConfigNodes] [Suite:openshift/machine-config-operator/disruptive][Disruptive]Should properly report MCN conditions on node degrade [apigroup:machineconfiguration.openshift.io] [Serial]" started: 22/31/55 "[sig-etcd][Feature:DisasterRecovery][Suite:openshift/etcd/recovery][Timeout:1h] [Feature:EtcdRecovery][Disruptive] Recover with quorum restore [Serial]" I1105 05:34:41.903883 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:35:22Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:29 firstTimestamp:2025-11-05T05:15:52Z lastTimestamp:2025-11-05T05:35:22Z reason:ProbeError]}" time="2025-11-05T05:35:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:29 firstTimestamp:2025-11-05T05:15:52Z lastTimestamp:2025-11-05T05:35:22Z reason:Unhealthy]}" I1105 05:35:42.381245 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:35:47Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log 
etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" time="2025-11-05T05:35:47Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/edcb3c60-5559-4261-b4df-93425fd5bde4 container/etcd mirror-uid/5c614602262aa354cfc30f1678744e99" time="2025-11-05T05:35:49Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:2366fe9c05 node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T05:35:49Z lastTimestamp:2025-11-05T05:35:49Z reason:NodeHasSufficientMemory roles:control-plane,master]}" time="2025-11-05T05:35:49Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:31c7a499dc node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T05:35:49Z lastTimestamp:2025-11-05T05:35:49Z reason:NodeHasNoDiskPressure roles:control-plane,master]}" time="2025-11-05T05:35:49Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:4c6580353a node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T05:35:49Z lastTimestamp:2025-11-05T05:35:49Z reason:NodeHasSufficientPID roles:control-plane,master]}" time="2025-11-05T05:35:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: ContainerCreating" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/e399d651-c8be-4a31-b6ea-90a2344e60eb container/etcd mirror-uid/8a2f7410bb740c6451c462467e6eb02b" E1105 05:36:23.202270 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-kube-apiserver/revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-2 &Pod{ObjectMeta:{revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-2 openshift-kube-apiserver f9b92390-e810-4ef1-9397-5f36f179a98b 64460 1 2025-11-05 05:35:47 +0000 UTC map[app:pruner] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.129.0.114/23"],"mac_address":"0a:58:0a:81:00:72","gateway_ips":["10.129.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.129.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.129.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.129.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.129.0.1"}],"ip_address":"10.129.0.114/23","gateway_ip":"10.129.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.0.114" ], "mac": "0a:58:0a:81:00:72", "default": true, "dns": {} }]] [{v1 ConfigMap revision-status-11 355a0755-d553-4433-9cc5-720796b01561 }] [] [{ci-op-x0f88pwp-f3da4-d9fgd-master-2 Update v1 2025-11-05 05:35:47 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.ovn.org/pod-networks":{}}}} status} {cluster-kube-apiserver-operator Update v1 2025-11-05 05:35:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"355a0755-d553-4433-9cc5-720796b01561\"}":{}}},"f:spec":{"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"pruner\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:privileged":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeName":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:runAsUser":{}},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"kube-api-access\"}":{".":{},"f:name":{},"f:projected":{".":{},"f:defaultMode":{},"f:sources":{}}},"k:{\"name\":\"kubelet-dir\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}}}}} } {multus-daemon Update v1 2025-11-05 05:35:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 05:35:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{".":{},"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.129.0.114\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kubelet-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3600,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:pruner,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,Command:[cluster-kube-apiserver-operator prune],Args:[-v=4 --max-eligible-revision=11 --protected-revisions=5,6,7,8,9,10,11 --resource-dir=/etc/kubernetes/static-pod-resources --cert-dir=kube-apiserver-certs --static-pod-name=kube-apiserver-pod],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/etc/kubernetes/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:installer-sa,DeprecatedServiceAccount:installer-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:installer-sa-dockercfg-8kzds,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:*false,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:49 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:47 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:50 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:50 +0000 UTC,Reason:PodCompleted,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:47 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.0.5,PodIP:10.129.0.114,StartTime:2025-11-05 05:35:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pruner,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-11-05 05:35:48 +0000 UTC,FinishedAt:2025-11-05 05:35:49 +0000 UTC,ContainerID:cri-o://59c73b52bdaa27e6d71b5650a49dc029a1876201c209e65de149115c3f69a3d2,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ImageID:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ContainerID:cri-o://59c73b52bdaa27e6d71b5650a49dc029a1876201c209e65de149115c3f69a3d2,Started:*false,AllocatedResources:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Resources:&ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kubelet-dir,MountPath:/etc/kubernetes/,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:0,GID:0,SupplementalGroups:[0],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Guaranteed,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.129.0.114,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.5,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 05:36:23.202754 1669 pod_ip_controller.go:75] "Unhandled Error" err="invalid queue key '{openshift-kube-apiserver/revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 &Pod{ObjectMeta:{revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 openshift-kube-apiserver c5ed3d72-64c5-4c68-89ec-28812fe18ff8 64445 1 2025-11-05 05:35:49 +0000 UTC map[app:pruner] map[k8s.ovn.org/pod-networks:{\"default\":{\"ip_addresses\":[\"10.130.2.80/23\"],\"mac_address\":\"0a:58:0a:82:02:50\",\"gateway_ips\":[\"10.130.2.1\"],\"routes\":[{\"dest\":\"10.128.0.0/14\",\"nextHop\":\"10.130.2.1\"},{\"dest\":\"172.30.0.0/16\",\"nextHop\":\"10.130.2.1\"},{\"dest\":\"169.254.0.5/32\",\"nextHop\":\"10.130.2.1\"},{\"dest\":\"100.64.0.0/16\",\"nextHop\":\"10.130.2.1\"}],\"ip_address\":\"10.130.2.80/23\",\"gateway_ip\":\"10.130.2.1\",\"role\":\"primary\"}}] [{v1 ConfigMap revision-status-11 355a0755-d553-4433-9cc5-720796b01561 }] [] [{ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 Update v1 2025-11-05 05:35:49 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:k8s.ovn.org/pod-networks\":{}}}} status} {cluster-kube-apiserver-operator Update v1 2025-11-05 05:35:49 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:app\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"355a0755-d553-4433-9cc5-720796b01561\\\"}\":{}}},\"f:spec\":{\"f:automountServiceAccountToken\":{},\"f:containers\":{\"k:{\\\"name\\\":\\\"pruner\\\"}\":{\".\":{},\"f:args\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{\".\":{},\"f:limits\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}},\"f:requests\":{\".\":{},\"f:cpu\":{},\"f:memory\":{}}},\"f:securityContext\":{\".\":{},\"f:privileged\":{},\"f:runAsUser\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}},\"k:{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{},\"f:readOnly\":{}}}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:nodeName\":{},\"f:priorityClassName\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{\".\":{},\"f:runAsUser\":{}},\"f:serviceAccount\":{},\"f:serviceAccountName\":{},\"f:terminationGracePeriodSeconds\":{},\"f:tolerations\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"kube-api-access\\\"}\":{\".\":{},\"f:name\":{},\"f:projected\":{\".\":{},\"f:defaultMode\":{},\"f:sources\":{}}},\"k:{\\\"name\\\":\\\"kubelet-dir\\\"}\":{\".\":{},\"f:hostPath\":{\".\":{},\"f:path\":{},\"f:type\":{}},\"f:name\":{}}}}} } {kubelet Update v1 2025-11-05 05:35:49 +0000 UTC FieldsV1 {\"f:status\":{\"f:conditions\":{\".\":{},\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:observedGeneration\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:observedGeneration\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"PodReadyToStartContainers\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:observedGeneration\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"PodScheduled\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:observedGeneration\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:observedGeneration\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:hostIPs\":{},\"f:observedGeneration\":{},\"f:startTime\":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kubelet-dir,VolumeSource:VolumeSource{HostPath:&HostPathVolumeSource{Path:/etc/kubernetes/,Type:*,},EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3600,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:pruner,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,Command:[cluster-kube-apiserver-operator prune],Args:[-v=4 --max-eligible-revision=11 --protected-revisions=5,6,7,8,9,10,11 --resource-dir=/etc/kubernetes/static-pod-resources --cert-dir=kube-apiserver-certs --static-pod-name=kube-apiserver-pod],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M DecimalSI},},Requests:ResourceList{cpu: {{150 -3} {} 150m DecimalSI},memory: {{200 6} {} 200M 
DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/etc/kubernetes/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Never,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:installer-sa,DeprecatedServiceAccount:installer-sa,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:installer-sa-dockercfg-8kzds,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:*false,Tolerations:[]Toleration{Toleration{Key:,Operator:Exists,Value:,Effect:,TolerationSeconds:nil,},},HostAliases:[]HostAlias{},PriorityClassName:system-node-critical,Priority:*2000001000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:49 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:49 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [pruner],ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [pruner],ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 05:35:49 +0000 
UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.0.7,PodIP:,StartTime:2025-11-05 05:35:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pruner,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay-proxy.ci.openshift.org/openshift/ci@sha256:2b4a7094f94bb39adc6827f1d01aa1ef3734eff3d3f87d18b9a3641f111dae14,ImageID:,ContainerID:,Started:*false,AllocatedResources:ResourceList{},Resources:nil,VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kubelet-dir,MountPath:/etc/kubernetes/,ReadOnly:false,RecursiveReadOnly:nil,},VolumeMountStatus{Name:kube-api-access,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:nil,AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:Guaranteed,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.0.7,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces" time="2025-11-05T05:36:25Z" level=info msg="event interval matches PodSandbox" locator="{Kind map[hmsg:863959306c namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-kube-apiserver_c5ed3d72-64c5-4c68-89ec-28812fe18ff8_0(76d640bf78686541b4040685fc437808033dfb1b60800a9ffb9e70ad2ac96d99): error adding pod openshift-kube-apiserver_revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 to CNI network \"multus-cni-network\": plugin type=\"multus-shim\" name=\"multus-cni-network\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\"76d640bf78686541b4040685fc437808033dfb1b60800a9ffb9e70ad2ac96d99\" Netns:\"/var/run/netns/8adba425-441a-4118-b219-1197bb83df97\" IfName:\"eth0\" Args:\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0;K8S_POD_INFRA_CONTAINER_ID=76d640bf78686541b4040685fc437808033dfb1b60800a9ffb9e70ad2ac96d99;K8S_POD_UID=c5ed3d72-64c5-4c68-89ec-28812fe18ff8\" Path:\"\" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0] networking: Multus: [openshift-kube-apiserver/revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0/c5ed3d72-64c5-4c68-89ec-28812fe18ff8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 in out of cluster comm: SetNetworkStatus: failed to update the pod revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 in out of cluster comm: status update failed for pod /: pods \"revision-pruner-11-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" not found\n': StdinData: 
{\"auxiliaryCNIChainName\":\"vendor-cni-chain\",\"binDir\":\"/var/lib/cni/bin\",\"clusterNetwork\":\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\",\"cniVersion\":\"0.3.1\",\"daemonSocketDir\":\"/run/multus/socket\",\"globalNamespaces\":\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\",\"logLevel\":\"verbose\",\"logToStderr\":true,\"name\":\"multus-cni-network\",\"namespaceIsolation\":true,\"type\":\"multus-shim\"} map[firstTimestamp:2025-11-05T05:36:19Z lastTimestamp:2025-11-05T05:36:19Z reason:FailedCreatePodSandBox]}" time="2025-11-05T05:36:25Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:27 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T05:36:22Z reason:ProbeError]}" time="2025-11-05T05:36:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:7f6d64717b namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:27 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T05:36:22Z reason:Unhealthy]}" time="2025-11-05T05:36:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:23Z reason:ProbeError]}" time="2025-11-05T05:36:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:11 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:23Z reason:Unhealthy]}" time="2025-11-05T05:36:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f305fcc059 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:36:14Z lastTimestamp:2025-11-05T05:36:24Z reason:ProbeError]}" time="2025-11-05T05:36:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1028212dbd namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:36:14Z lastTimestamp:2025-11-05T05:36:24Z reason:Unhealthy]}" time="2025-11-05T05:36:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:28 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T05:36:27Z reason:ProbeError]}" time="2025-11-05T05:36:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:7f6d64717b namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:28 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T05:36:27Z reason:Unhealthy]}" time="2025-11-05T05:36:28Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:12 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:28Z reason:ProbeError]}" time="2025-11-05T05:36:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:12 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:28Z reason:Unhealthy]}" time="2025-11-05T05:36:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:29 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T05:36:32Z reason:ProbeError]}" time="2025-11-05T05:36:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:7f6d64717b namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:29 firstTimestamp:2025-11-05T04:22:27Z 
lastTimestamp:2025-11-05T05:36:32Z reason:Unhealthy]}" time="2025-11-05T05:36:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:13 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:33Z reason:ProbeError]}" time="2025-11-05T05:36:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:13 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:33Z reason:Unhealthy]}" time="2025-11-05T05:36:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:14 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:33Z reason:ProbeError]}" time="2025-11-05T05:36:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:14 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:33Z reason:Unhealthy]}" time="2025-11-05T05:36:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f305fcc059 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:36:14Z lastTimestamp:2025-11-05T05:36:34Z reason:ProbeError]}" time="2025-11-05T05:36:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1028212dbd namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:36:14Z lastTimestamp:2025-11-05T05:36:34Z reason:Unhealthy]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing 
apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:2 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:3 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:4 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:5 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:6 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:7 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:35Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:8 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:35Z reason:ConfigMissing]}" time="2025-11-05T05:36:36Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator 
hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:9 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:36Z reason:ConfigMissing]}" time="2025-11-05T05:36:37Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:10 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:37Z reason:ConfigMissing]}" time="2025-11-05T05:36:38Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:15 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:38Z reason:ProbeError]}" time="2025-11-05T05:36:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:15 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:38Z reason:Unhealthy]}" time="2025-11-05T05:36:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-697848cdf6-plsnz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:36:38Z lastTimestamp:2025-11-05T05:36:38Z reason:Unhealthy]}" time="2025-11-05T05:36:40Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:11 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:40Z reason:ConfigMissing]}" I1105 05:36:42.655660 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:36:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:16 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:43Z reason:ProbeError]}" time="2025-11-05T05:36:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind 
map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:16 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:43Z reason:Unhealthy]}" time="2025-11-05T05:36:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-697848cdf6-plsnz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:36:38Z lastTimestamp:2025-11-05T05:36:43Z reason:Unhealthy]}" time="2025-11-05T05:36:45Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:12 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:45Z reason:ConfigMissing]}" time="2025-11-05T05:36:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:17 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:48Z reason:ProbeError]}" time="2025-11-05T05:36:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:17 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:48Z reason:Unhealthy]}" time="2025-11-05T05:36:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-mj7sv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:36:48Z lastTimestamp:2025-11-05T05:36:48Z reason:Unhealthy]}" time="2025-11-05T05:36:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-697848cdf6-plsnz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:36:38Z lastTimestamp:2025-11-05T05:36:48Z reason:Unhealthy]}" time="2025-11-05T05:36:48Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing 
apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:13 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:48Z reason:ConfigMissing]}" time="2025-11-05T05:36:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:18 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:53Z reason:ProbeError]}" time="2025-11-05T05:36:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:18 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:53Z reason:Unhealthy]}" time="2025-11-05T05:36:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-mj7sv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:36:48Z lastTimestamp:2025-11-05T05:36:53Z reason:Unhealthy]}" time="2025-11-05T05:36:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1
pod:apiserver-697848cdf6-plsnz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:36:38Z lastTimestamp:2025-11-05T05:36:53Z reason:Unhealthy]}" time="2025-11-05T05:36:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:55Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:47d5e51f9e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(3391d6995136986bfa132bac3ac575e2) map[firstTimestamp:2025-11-05T05:36:55Z lastTimestamp:2025-11-05T05:36:55Z reason:BackOff]}" time="2025-11-05T05:36:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:55Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:14 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:36:55Z reason:ConfigMissing]}" time="2025-11-05T05:36:56Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:57Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/cb9b0d0f-df1f-4666-a9a2-a8179aa0b859 container/etcd mirror-uid/6ccfae29251e1b52524a0f025ba97b32" time="2025-11-05T05:36:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get 
\"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:19 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:58Z reason:ProbeError]}" time="2025-11-05T05:36:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:19 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:36:58Z reason:Unhealthy]}" time="2025-11-05T05:36:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-mj7sv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:36:48Z lastTimestamp:2025-11-05T05:36:58Z reason:Unhealthy]}" time="2025-11-05T05:36:58Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:36:59Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:36:59Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:47d5e51f9e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(3391d6995136986bfa132bac3ac575e2) map[count:2 firstTimestamp:2025-11-05T05:36:55Z lastTimestamp:2025-11-05T05:36:59Z reason:BackOff]}" time="2025-11-05T05:37:00Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:37:00Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:47d5e51f9e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(3391d6995136986bfa132bac3ac575e2) map[count:3 firstTimestamp:2025-11-05T05:36:55Z lastTimestamp:2025-11-05T05:37:00Z reason:BackOff]}" time="2025-11-05T05:37:01Z" 
level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:37:02Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:37:03Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:20 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:37:03Z reason:ProbeError]}" time="2025-11-05T05:37:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:20 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:37:03Z reason:Unhealthy]}" time="2025-11-05T05:37:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-mj7sv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:36:48Z lastTimestamp:2025-11-05T05:37:03Z reason:Unhealthy]}" time="2025-11-05T05:37:03Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:37:08Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:21 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T05:37:08Z reason:ProbeError]}" time="2025-11-05T05:37:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-mj7sv]}" message="{Unhealthy Readiness 
probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:36:48Z lastTimestamp:2025-11-05T05:37:08Z reason:Unhealthy]}" time="2025-11-05T05:37:08Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:24 firstTimestamp:2025-11-05T04:52:54Z lastTimestamp:2025-11-05T05:37:08Z reason:ProbeError]}" time="2025-11-05T05:37:11Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-fb8977648-6znqf]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:37:11Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-5767458856-dcdr6]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:37:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-mj7sv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:36:48Z lastTimestamp:2025-11-05T05:37:13Z reason:Unhealthy]}" time="2025-11-05T05:37:18Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:15 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:37:18Z reason:ConfigMissing]}" time="2025-11-05T05:37:33Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:47d5e51f9e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(3391d6995136986bfa132bac3ac575e2) map[count:4 firstTimestamp:2025-11-05T05:36:55Z lastTimestamp:2025-11-05T05:37:32Z reason:BackOff]}" time="2025-11-05T05:37:34Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:47d5e51f9e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" 
message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(3391d6995136986bfa132bac3ac575e2) map[count:5 firstTimestamp:2025-11-05T05:36:55Z lastTimestamp:2025-11-05T05:37:34Z reason:BackOff]}" time="2025-11-05T05:37:36Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{Kind map[deployment:kube-apiserver-operator hmsg:9ca73b16c2 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.8:2379] map[count:16 firstTimestamp:2025-11-05T05:36:35Z lastTimestamp:2025-11-05T05:37:36Z reason:ConfigMissing]}" time="2025-11-05T05:37:37Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5a79959828 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T05:37:37Z lastTimestamp:2025-11-05T05:37:37Z reason:ProbeError]}" time="2025-11-05T05:37:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:95 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:37:37Z reason:Unhealthy]}" time="2025-11-05T05:37:41Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-5767458856-fsrr7]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:37:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:37:42Z reason:Unhealthy]}" time="2025-11-05T05:37:42Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5a79959828 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller 
ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:2 firstTimestamp:2025-11-05T05:37:37Z lastTimestamp:2025-11-05T05:37:42Z reason:ProbeError]}" time="2025-11-05T05:37:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:96 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T05:37:42Z reason:Unhealthy]}" I1105 05:37:42.890234 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:37:46Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:47d5e51f9e namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(3391d6995136986bfa132bac3ac575e2) map[count:6 firstTimestamp:2025-11-05T05:36:55Z lastTimestamp:2025-11-05T05:37:46Z reason:BackOff]}" time="2025-11-05T05:37:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:37:47Z reason:Unhealthy]}" time="2025-11-05T05:37:47Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-567c95b6d8-pwkkg]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:37:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:37:52Z reason:Unhealthy]}" time="2025-11-05T05:37:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:37:57Z reason:Unhealthy]}" time="2025-11-05T05:38:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:38:02Z reason:Unhealthy]}" time="2025-11-05T05:38:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:38:07Z reason:Unhealthy]}" time="2025-11-05T05:38:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:38:12Z reason:Unhealthy]}" time="2025-11-05T05:38:17Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:17Z reason:ProbeError]}" time="2025-11-05T05:38:17Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:5 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:17Z reason:Unhealthy]}" time="2025-11-05T05:38:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind 
map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:38:17Z reason:Unhealthy]}" time="2025-11-05T05:38:22Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:22Z reason:ProbeError]}" time="2025-11-05T05:38:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:6 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:22Z reason:Unhealthy]}" time="2025-11-05T05:38:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:38:22Z reason:Unhealthy]}" time="2025-11-05T05:38:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:27Z reason:ProbeError]}" time="2025-11-05T05:38:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:7 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:27Z reason:Unhealthy]}" time="2025-11-05T05:38:27Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:27Z 
reason:ProbeError]}" time="2025-11-05T05:38:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:8 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:27Z reason:Unhealthy]}" time="2025-11-05T05:38:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:37:42Z lastTimestamp:2025-11-05T05:38:27Z reason:Unhealthy]}" time="2025-11-05T05:38:28Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-8fbf6797c-q4ncx]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:38:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:32Z reason:ProbeError]}" time="2025-11-05T05:38:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:9 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:32Z reason:Unhealthy]}" time="2025-11-05T05:38:32Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:4d86abf7b5 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.66:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.66:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:38:32Z lastTimestamp:2025-11-05T05:38:32Z reason:ProbeError]}" time="2025-11-05T05:38:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:136c05df94 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-6rxkx]}" message="{Unhealthy 
Readiness probe failed: Get \"https://10.130.2.66:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.66:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:38:32Z lastTimestamp:2025-11-05T05:38:32Z reason:Unhealthy]}" time="2025-11-05T05:38:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6b30a282ed namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:38:35Z lastTimestamp:2025-11-05T05:38:35Z reason:ProbeError]}" time="2025-11-05T05:38:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:038a55ce52 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[firstTimestamp:2025-11-05T05:38:35Z lastTimestamp:2025-11-05T05:38:35Z reason:Unhealthy]}" time="2025-11-05T05:38:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:37Z reason:ProbeError]}" time="2025-11-05T05:38:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:10 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:37Z reason:Unhealthy]}" time="2025-11-05T05:38:40Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-567c95b6d8-mgjvx]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:38:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:38:40Z reason:Unhealthy]}" time="2025-11-05T05:38:42Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:42Z reason:ProbeError]}" time="2025-11-05T05:38:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:11 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:42Z reason:Unhealthy]}" time="2025-11-05T05:38:43Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:43Z reason:ProbeError]}" time="2025-11-05T05:38:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:2 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:43Z reason:Unhealthy]}" I1105 05:38:43.285105 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:38:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6b30a282ed namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:38:35Z lastTimestamp:2025-11-05T05:38:45Z reason:ProbeError]}" time="2025-11-05T05:38:45Z" level=info msg="event interval matches 
ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:038a55ce52 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:38:35Z lastTimestamp:2025-11-05T05:38:45Z reason:Unhealthy]}" time="2025-11-05T05:38:46Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:38:45Z reason:Unhealthy]}" time="2025-11-05T05:38:47Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:12 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:47Z reason:ProbeError]}" time="2025-11-05T05:38:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:12 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:47Z reason:Unhealthy]}" time="2025-11-05T05:38:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:48Z reason:ProbeError]}" time="2025-11-05T05:38:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:3 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:48Z reason:Unhealthy]}" time="2025-11-05T05:38:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:38:50Z 
reason:Unhealthy]}" time="2025-11-05T05:38:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:13 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:52Z reason:ProbeError]}" time="2025-11-05T05:38:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:13 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:52Z reason:Unhealthy]}" time="2025-11-05T05:38:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:53Z reason:ProbeError]}" time="2025-11-05T05:38:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:4 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:53Z reason:Unhealthy]}" time="2025-11-05T05:38:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:53Z reason:ProbeError]}" time="2025-11-05T05:38:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:5 firstTimestamp:2025-11-05T04:52:32Z lastTimestamp:2025-11-05T05:38:53Z reason:Unhealthy]}" time="2025-11-05T05:38:55Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6b30a282ed 
namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:38:35Z lastTimestamp:2025-11-05T05:38:55Z reason:ProbeError]}" time="2025-11-05T05:38:55Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:038a55ce52 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:38:35Z lastTimestamp:2025-11-05T05:38:55Z reason:Unhealthy]}" time="2025-11-05T05:38:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:38:55Z reason:Unhealthy]}" time="2025-11-05T05:38:56Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-f8pn7]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:38:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:14 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:57Z reason:ProbeError]}" time="2025-11-05T05:38:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:14 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:38:57Z reason:Unhealthy]}" time="2025-11-05T05:39:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:39:00Z reason:Unhealthy]}" time="2025-11-05T05:39:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:15 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:39:02Z reason:ProbeError]}" time="2025-11-05T05:39:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:15 firstTimestamp:2025-11-05T04:52:24Z lastTimestamp:2025-11-05T05:39:02Z reason:Unhealthy]}" time="2025-11-05T05:39:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:39:05Z reason:Unhealthy]}" time="2025-11-05T05:39:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with 
statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:39:10Z reason:Unhealthy]}" time="2025-11-05T05:39:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:39:15Z reason:Unhealthy]}" time="2025-11-05T05:39:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:39:20Z reason:Unhealthy]}" time="2025-11-05T05:39:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-7cf6d99599-zs6lh]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:39:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:38:40Z lastTimestamp:2025-11-05T05:39:25Z reason:Unhealthy]}" time="2025-11-05T05:39:30Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:04f6f1b7ee namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.104:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.104:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:39:30Z lastTimestamp:2025-11-05T05:39:30Z reason:ProbeError]}" time="2025-11-05T05:39:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:151fb1a567 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-8g6lj]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.104:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.104:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:39:30Z lastTimestamp:2025-11-05T05:39:30Z reason:Unhealthy]}" time="2025-11-05T05:39:38Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-nv95f]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. 
preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:39:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:39:41Z reason:Unhealthy]}" time="2025-11-05T05:39:42Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:8 firstTimestamp:2025-11-05T05:39:12Z lastTimestamp:2025-11-05T05:39:42Z reason:ProbeError]}" I1105 05:39:43.594322 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:39:46Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind 
map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:39:46Z reason:Unhealthy]}" time="2025-11-05T05:39:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:39:51Z reason:Unhealthy]}" time="2025-11-05T05:39:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:39:56Z reason:Unhealthy]}" time="2025-11-05T05:39:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:39:58Z lastTimestamp:2025-11-05T05:39:58Z reason:ProbeError]}" time="2025-11-05T05:39:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:39:58Z lastTimestamp:2025-11-05T05:39:58Z reason:Unhealthy]}" time="2025-11-05T05:39:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:39:58Z lastTimestamp:2025-11-05T05:39:59Z reason:ProbeError]}" time="2025-11-05T05:39:59Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:39:58Z lastTimestamp:2025-11-05T05:39:59Z reason:Unhealthy]}" time="2025-11-05T05:40:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with 
statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:40:01Z reason:Unhealthy]}" time="2025-11-05T05:40:01Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:133 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T05:40:01Z reason:ProbeError]}" time="2025-11-05T05:40:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:40:06Z reason:Unhealthy]}" time="2025-11-05T05:40:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:40:11Z reason:Unhealthy]}" time="2025-11-05T05:40:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:40:16Z reason:Unhealthy]}" time="2025-11-05T05:40:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:40:21Z reason:Unhealthy]}" time="2025-11-05T05:40:23Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:23Z reason:ProbeError]}" time="2025-11-05T05:40:23Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:23Z reason:Unhealthy]}" time="2025-11-05T05:40:26Z" level=info 
msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5767458856-dcdr6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:39:41Z lastTimestamp:2025-11-05T05:40:26Z reason:Unhealthy]}" time="2025-11-05T05:40:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:28Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:28Z reason:ProbeError]}" time="2025-11-05T05:40:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:28Z reason:Unhealthy]}" time="2025-11-05T05:40:28Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:28Z reason:ProbeError]}" time="2025-11-05T05:40:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:28Z reason:Unhealthy]}" time="2025-11-05T05:40:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" 
component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/c793e23a-ffa9-4bba-abbc-91f8b3e55ed6 container/etcd mirror-uid/13523fd0d09de28c2fb06ef7bc236ba9" time="2025-11-05T05:40:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:33Z reason:ProbeError]}" time="2025-11-05T05:40:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:33Z reason:Unhealthy]}" time="2025-11-05T05:40:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:33Z reason:ProbeError]}" time="2025-11-05T05:40:33Z" level=info msg="event interval 
matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:33Z reason:Unhealthy]}" time="2025-11-05T05:40:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:33Z reason:ProbeError]}" time="2025-11-05T05:40:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T05:40:33Z reason:Unhealthy]}" time="2025-11-05T05:40:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T05:40:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T05:40:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T05:40:36Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T05:40:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T05:40:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:38Z reason:ProbeError]}" time="2025-11-05T05:40:38Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:38Z reason:Unhealthy]}" time="2025-11-05T05:40:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:38Z reason:ProbeError]}" time="2025-11-05T05:40:38Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:38Z reason:Unhealthy]}" time="2025-11-05T05:40:38Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-7cf6d99599-8jf7f]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:40:38Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-k7pz2]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:40:39Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:40:39Z reason:Unhealthy]}" time="2025-11-05T05:40:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:40:40Z reason:Unhealthy]}" time="2025-11-05T05:40:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:43Z reason:ProbeError]}" time="2025-11-05T05:40:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T05:40:43Z reason:Unhealthy]}" I1105 05:40:43.980740 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:40:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:40:44Z reason:Unhealthy]}" time="2025-11-05T05:40:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:40:45Z reason:Unhealthy]}" time="2025-11-05T05:40:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:40:49Z reason:Unhealthy]}" time="2025-11-05T05:40:50Z" level=info 
msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:40:50Z reason:Unhealthy]}" time="2025-11-05T05:40:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:40:54Z reason:Unhealthy]}" time="2025-11-05T05:40:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:40:55Z reason:Unhealthy]}" time="2025-11-05T05:40:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:40:59Z reason:Unhealthy]}" time="2025-11-05T05:41:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:41:00Z reason:Unhealthy]}" time="2025-11-05T05:41:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:41:04Z reason:Unhealthy]}" time="2025-11-05T05:41:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:41:05Z reason:Unhealthy]}" time="2025-11-05T05:41:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:41:09Z reason:Unhealthy]}" time="2025-11-05T05:41:10Z" level=info msg="event interval matches 
KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:41:10Z reason:Unhealthy]}" time="2025-11-05T05:41:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:41:14Z reason:Unhealthy]}" time="2025-11-05T05:41:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:41:15Z reason:Unhealthy]}" time="2025-11-05T05:41:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:41:19Z reason:Unhealthy]}" time="2025-11-05T05:41:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:41:20Z reason:Unhealthy]}" time="2025-11-05T05:41:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:40:39Z lastTimestamp:2025-11-05T05:41:24Z reason:Unhealthy]}" time="2025-11-05T05:41:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:40:40Z lastTimestamp:2025-11-05T05:41:25Z reason:Unhealthy]}" time="2025-11-05T05:41:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:cfd2ca2d4f namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.90:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:41:29Z lastTimestamp:2025-11-05T05:41:29Z reason:ProbeError]}" 
time="2025-11-05T05:41:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:1b012746d6 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-pwkkg]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.90:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.90:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:41:29Z lastTimestamp:2025-11-05T05:41:29Z reason:Unhealthy]}" time="2025-11-05T05:41:30Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5e3cda70d4 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.105:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:41:30Z lastTimestamp:2025-11-05T05:41:30Z reason:ProbeError]}" time="2025-11-05T05:41:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:1c6ccf69cf namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.105:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:41:30Z lastTimestamp:2025-11-05T05:41:30Z reason:Unhealthy]}" time="2025-11-05T05:41:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5e3cda70d4 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.105:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:41:30Z lastTimestamp:2025-11-05T05:41:35Z reason:ProbeError]}" time="2025-11-05T05:41:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1c6ccf69cf namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.105:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:41:30Z lastTimestamp:2025-11-05T05:41:35Z reason:Unhealthy]}" time="2025-11-05T05:41:40Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5e3cda70d4 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-697848cdf6-lrhfr]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.105:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:41:30Z lastTimestamp:2025-11-05T05:41:40Z reason:ProbeError]}" I1105 05:41:44.324356 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:41:51Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd 
node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:41:51Z lastTimestamp:2025-11-05T05:41:51Z reason:ProbeError]}" time="2025-11-05T05:41:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:41:51Z lastTimestamp:2025-11-05T05:41:51Z reason:Unhealthy]}" time="2025-11-05T05:42:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:32 firstTimestamp:2025-11-05T04:52:07Z lastTimestamp:2025-11-05T05:42:12Z reason:ProbeError]}" time="2025-11-05T05:42:20Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:20Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:24Z" level=error msg="pod logged an error: container \"etcd\" in pod 
\"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/fed3f614-adac-43f0-8664-870c56b0fa57 container/etcd mirror-uid/3391d6995136986bfa132bac3ac575e2" time="2025-11-05T05:42:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/c83a93d6-2853-4e99-a308-4ea80999ac6e container/etcd mirror-uid/ae74ab1be7f1ded5bac81f4d5ad7680a" time="2025-11-05T05:42:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/c83a93d6-2853-4e99-a308-4ea80999ac6e container/etcd mirror-uid/ae74ab1be7f1ded5bac81f4d5ad7680a" time="2025-11-05T05:42:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/c83a93d6-2853-4e99-a308-4ea80999ac6e container/etcd mirror-uid/ae74ab1be7f1ded5bac81f4d5ad7680a" time="2025-11-05T05:42:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/c83a93d6-2853-4e99-a308-4ea80999ac6e container/etcd mirror-uid/ae74ab1be7f1ded5bac81f4d5ad7680a" time="2025-11-05T05:42:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/c83a93d6-2853-4e99-a308-4ea80999ac6e container/etcd mirror-uid/ae74ab1be7f1ded5bac81f4d5ad7680a" time="2025-11-05T05:42:39Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-7cf6d99599-g2g5q]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T05:42:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:42:44Z reason:Unhealthy]}" I1105 05:42:44.608192 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:42:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:42:49Z reason:Unhealthy]}" time="2025-11-05T05:42:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:42:54Z reason:Unhealthy]}" time="2025-11-05T05:42:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:42:59Z reason:Unhealthy]}" time="2025-11-05T05:43:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:43:04Z reason:Unhealthy]}" time="2025-11-05T05:43:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:43:09Z reason:Unhealthy]}" time="2025-11-05T05:43:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:43:14Z reason:Unhealthy]}" time="2025-11-05T05:43:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy 
Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:43:19Z reason:Unhealthy]}" time="2025-11-05T05:43:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:43:24Z reason:Unhealthy]}" time="2025-11-05T05:43:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:42:44Z lastTimestamp:2025-11-05T05:43:29Z reason:Unhealthy]}" time="2025-11-05T05:43:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d5d08ea7cc namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.62:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.62:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:43:34Z lastTimestamp:2025-11-05T05:43:34Z reason:ProbeError]}" time="2025-11-05T05:43:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:75b8d33d78 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.62:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.62:8443: connect: connection refused map[firstTimestamp:2025-11-05T05:43:34Z lastTimestamp:2025-11-05T05:43:34Z reason:Unhealthy]}" time="2025-11-05T05:43:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d5d08ea7cc namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.62:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.62:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:43:34Z lastTimestamp:2025-11-05T05:43:39Z reason:ProbeError]}" time="2025-11-05T05:43:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:75b8d33d78 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.62:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.62:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:43:34Z lastTimestamp:2025-11-05T05:43:39Z reason:Unhealthy]}" time="2025-11-05T05:43:40Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed 
with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T05:43:40Z lastTimestamp:2025-11-05T05:43:40Z reason:ProbeError]}" time="2025-11-05T05:43:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:33 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:43:40Z reason:Unhealthy]}" time="2025-11-05T05:43:44Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d5d08ea7cc namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-697848cdf6-k6vtb]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.62:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.62:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:43:34Z lastTimestamp:2025-11-05T05:43:44Z reason:ProbeError]}" I1105 05:43:44.861807 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:43:45Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind 
map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:2 firstTimestamp:2025-11-05T05:43:40Z lastTimestamp:2025-11-05T05:43:45Z reason:ProbeError]}" time="2025-11-05T05:43:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:34 firstTimestamp:2025-11-05T05:14:20Z lastTimestamp:2025-11-05T05:43:45Z reason:Unhealthy]}" time="2025-11-05T05:43:53Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:43:53Z reason:ProbeError]}" time="2025-11-05T05:43:53Z" level=info msg="event 
interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:43:53Z reason:Unhealthy]}" time="2025-11-05T05:43:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:43:58Z reason:ProbeError]}" time="2025-11-05T05:43:58Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:2 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:43:58Z reason:Unhealthy]}" time="2025-11-05T05:44:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:03Z reason:ProbeError]}" time="2025-11-05T05:44:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:3 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:03Z reason:Unhealthy]}" time="2025-11-05T05:44:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:03Z reason:ProbeError]}" time="2025-11-05T05:44:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:4 firstTimestamp:2025-11-05T05:43:53Z 
lastTimestamp:2025-11-05T05:44:03Z reason:Unhealthy]}" time="2025-11-05T05:44:08Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:08Z reason:ProbeError]}" time="2025-11-05T05:44:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:08Z reason:Unhealthy]}" time="2025-11-05T05:44:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:13Z reason:ProbeError]}" time="2025-11-05T05:44:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:13Z reason:Unhealthy]}" time="2025-11-05T05:44:18Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:18Z reason:ProbeError]}" time="2025-11-05T05:44:18Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:7 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:18Z reason:Unhealthy]}" time="2025-11-05T05:44:18Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/e83b5168-60d0-4638-bf72-d3e42dcbf56e container/etcd 
mirror-uid/8a2f7410bb740c6451c462467e6eb02b" time="2025-11-05T05:44:19Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T05:44:20Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:10 firstTimestamp:2025-11-05T05:43:40Z lastTimestamp:2025-11-05T05:44:20Z reason:ProbeError]}" time="2025-11-05T05:44:20Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" 
time="2025-11-05T05:44:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T05:44:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T05:44:23Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:23Z reason:ProbeError]}" time="2025-11-05T05:44:23Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:8 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T05:44:23Z reason:Unhealthy]}" time="2025-11-05T05:44:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T05:44:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T05:44:33Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:03651b1d66 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T05:44:33Z lastTimestamp:2025-11-05T05:44:33Z reason:ProbeError]}" time="2025-11-05T05:44:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:633685d6c2 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T05:44:33Z lastTimestamp:2025-11-05T05:44:33Z reason:Unhealthy]}" time="2025-11-05T05:44:38Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f9c18c043b namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded\nbody: \n map[firstTimestamp:2025-11-05T05:44:38Z lastTimestamp:2025-11-05T05:44:38Z reason:ProbeError]}" time="2025-11-05T05:44:38Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:53b0411d30 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded map[firstTimestamp:2025-11-05T05:44:38Z lastTimestamp:2025-11-05T05:44:38Z reason:Unhealthy]}" time="2025-11-05T05:44:43Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:b43609d2bf namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T05:44:43Z lastTimestamp:2025-11-05T05:44:43Z reason:ProbeError]}" time="2025-11-05T05:44:43Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:e9a40d76a6 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T05:44:43Z lastTimestamp:2025-11-05T05:44:43Z reason:Unhealthy]}" I1105 05:44:45.133748 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:44:48Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f9c18c043b namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:44:38Z lastTimestamp:2025-11-05T05:44:48Z reason:ProbeError]}" time="2025-11-05T05:44:48Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:53b0411d30 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": context deadline exceeded map[count:2 firstTimestamp:2025-11-05T05:44:38Z lastTimestamp:2025-11-05T05:44:48Z reason:Unhealthy]}" time="2025-11-05T05:44:53Z" level=info msg="event interval matches 
EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:b43609d2bf namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:2 firstTimestamp:2025-11-05T05:44:43Z lastTimestamp:2025-11-05T05:44:53Z reason:ProbeError]}" I1105 05:45:45.416815 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:46:43Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:43Z reason:ProbeError]}" time="2025-11-05T05:46:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy 
Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:43Z reason:Unhealthy]}" I1105 05:46:45.715267 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T05:46:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:2 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:48Z reason:ProbeError]}" time="2025-11-05T05:46:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:48Z reason:Unhealthy]}" time="2025-11-05T05:46:53Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 
namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:3 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:53Z reason:ProbeError]}" time="2025-11-05T05:46:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:53Z reason:Unhealthy]}" time="2025-11-05T05:46:53Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping 
ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:4 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:53Z reason:ProbeError]}" time="2025-11-05T05:46:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:53Z reason:Unhealthy]}" time="2025-11-05T05:46:58Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer 
ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:5 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:58Z reason:ProbeError]}" time="2025-11-05T05:46:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:46:58Z reason:Unhealthy]}" time="2025-11-05T05:47:03Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller 
ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:6 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:03Z reason:ProbeError]}" time="2025-11-05T05:47:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:03Z reason:Unhealthy]}" time="2025-11-05T05:47:08Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller 
ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:7 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:08Z reason:ProbeError]}" time="2025-11-05T05:47:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:08Z reason:Unhealthy]}" time="2025-11-05T05:47:13Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync 
ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:8 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:13Z reason:ProbeError]}" time="2025-11-05T05:47:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:13Z reason:Unhealthy]}" time="2025-11-05T05:47:18Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:9 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:18Z reason:ProbeError]}" time="2025-11-05T05:47:18Z" 
level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:18Z reason:Unhealthy]}" time="2025-11-05T05:47:23Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:10 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:23Z reason:ProbeError]}" time="2025-11-05T05:47:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:23Z 
reason:Unhealthy]}" time="2025-11-05T05:47:28Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:11 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:28Z reason:ProbeError]}" time="2025-11-05T05:47:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:11 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:28Z reason:Unhealthy]}" time="2025-11-05T05:47:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log 
ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:12 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:33Z reason:ProbeError]}" time="2025-11-05T05:47:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:12 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:33Z reason:Unhealthy]}" time="2025-11-05T05:47:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable 
ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:13 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T05:47:38Z reason:ProbeError]}"
I1105 05:47:47.055738 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:48:47.304211 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:49:47.832284 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:50:50.176810 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
passed: (17m17s) 2025-11-05T05:51:04 "[sig-etcd][Feature:DisasterRecovery][Suite:openshift/etcd/recovery][Timeout:1h] [Feature:EtcdRecovery][Disruptive] Recover with quorum restore [Serial]"
started: 22/32/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] All Nodes in a standard Pool should have the PinnedImages PIS [apigroup:machineconfiguration.openshift.io] [Serial]"
passed: (30s) 2025-11-05T05:51:35 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] All Nodes in a standard Pool should have the PinnedImages PIS [apigroup:machineconfiguration.openshift.io] [Serial]"
started: 22/33/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:72137-Create a password for a user different from 'core' user"
I1105 05:51:50.456181 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:52:50.694235 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
passed: (1m44s) 2025-11-05T05:53:20 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:72137-Create a password for a user different from 'core' user"
started: 22/34/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should not update boot images on any MachineSet when not configured [apigroup:machineconfiguration.openshift.io]"
skip [github.com/openshift/machine-config-operator/test/extended/boot_image.go:40]: This test only applies to Azure platform
skipped: (9.3s) 2025-11-05T05:53:30 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should not update boot images on any MachineSet when not configured [apigroup:machineconfiguration.openshift.io]"
started: 22/35/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:83141-A valid MachineOSConfig leads to a successful MachineOSBuild and cleanup of its associated resources"
I1105 05:53:51.052042 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:54:51.314044 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:55:51.860989 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:56:52.138821 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:57:52.618555 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:58:52.868257 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 05:59:53.143287 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 06:00:53.397316 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 06:01:53.691822 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 06:02:53.957906 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
passed: (9m27s) 2025-11-05T06:02:57 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:83141-A valid MachineOSConfig leads to a successful MachineOSBuild and cleanup of its associated resources"
started: 22/36/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:62533-Passwd login must not work with ssh"
time="2025-11-05T06:03:19Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:7b8a9d1986 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-3ec5c1a5610a280abc5d00173d5874ee map[firstTimestamp:2025-11-05T06:03:19Z lastTimestamp:2025-11-05T06:03:19Z reason:SetDesiredConfig]}"
time="2025-11-05T06:03:46Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:ce38458c47 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-3ec5c1a5610a280abc5d00173d5874ee map[firstTimestamp:2025-11-05T06:03:46Z lastTimestamp:2025-11-05T06:03:46Z reason:SetDesiredConfig]}"
I1105 06:03:54.235023 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:04:17Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:f37c9d1b2a machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-3ec5c1a5610a280abc5d00173d5874ee map[firstTimestamp:2025-11-05T06:04:17Z lastTimestamp:2025-11-05T06:04:17Z reason:SetDesiredConfig]}"
I1105 06:04:54.545617 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:05:27Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:4 firstTimestamp:2025-11-05T05:20:27Z lastTimestamp:2025-11-05T06:05:27Z reason:SetDesiredConfig]}"
I1105 06:05:54.816019 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:05:55Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:3 firstTimestamp:2025-11-05T05:20:55Z lastTimestamp:2025-11-05T06:05:55Z reason:SetDesiredConfig]}"
time="2025-11-05T06:06:26Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:16a31e5783 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:3 firstTimestamp:2025-11-05T05:21:26Z lastTimestamp:2025-11-05T06:06:26Z reason:SetDesiredConfig]}"
I1105 06:06:55.148784 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
passed: (4m21s) 2025-11-05T06:07:18 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:62533-Passwd login must not work with ssh"
started: 22/37/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should update boot images only on MachineSets that are opted in [apigroup:machineconfiguration.openshift.io]"
skip [github.com/openshift/machine-config-operator/test/extended/boot_image.go:40]: This test only applies to Azure platform
skipped: (7.1s) 2025-11-05T06:07:26 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should update boot images only on MachineSets that are opted in [apigroup:machineconfiguration.openshift.io]"
started: 22/38/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] Invalid PIS leads to degraded MCN in a custom Pool [apigroup:machineconfiguration.openshift.io] [Serial]"
time="2025-11-05T06:07:38Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:099d4d3fd3 machineconfigpool:custom namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-custom-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T06:07:38Z lastTimestamp:2025-11-05T06:07:38Z reason:SetDesiredConfig]}"
I1105 06:07:55.398548 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:08:19Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:5 firstTimestamp:2025-11-05T05:20:27Z lastTimestamp:2025-11-05T06:08:19Z reason:SetDesiredConfig]}"
passed: (1m16s) 2025-11-05T06:08:44 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] Invalid PIS leads to degraded MCN in a custom Pool [apigroup:machineconfiguration.openshift.io] [Serial]"
started: 22/39/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:83138-A MachineOSConfig fails to apply or degrades if invalid inputs are given"
I1105 06:08:55.740832 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 06:09:55.968932 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
I1105 06:10:56.286840 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
passed: (2m14s) 2025-11-05T06:10:58 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:83138-A MachineOSConfig fails to apply or degrades if invalid inputs are given"
started: 22/40/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should stamp coreos-bootimages configmap with current MCO hash and release version [apigroup:machineconfiguration.openshift.io]"
skip [github.com/openshift/origin/test/extended/machine_config/helpers.go:56]: This test only applies to VSphere platform
skipped: (5.9s) 2025-11-05T06:11:05 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should stamp coreos-bootimages configmap with current MCO hash and release version [apigroup:machineconfiguration.openshift.io]"
started: 22/41/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:75552-apply ssh keys when root owns .ssh"
time="2025-11-05T06:11:35Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:7038ce0ce4 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-5dbc9da8c0c63c7c59f89d73cd41c9e2 map[firstTimestamp:2025-11-05T06:11:35Z lastTimestamp:2025-11-05T06:11:35Z reason:SetDesiredConfig]}"
I1105 06:11:56.626752 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:12:02Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:742a00d707 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-5dbc9da8c0c63c7c59f89d73cd41c9e2 map[firstTimestamp:2025-11-05T06:12:02Z lastTimestamp:2025-11-05T06:12:02Z reason:SetDesiredConfig]}"
time="2025-11-05T06:12:33Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:641ff23f9c machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-5dbc9da8c0c63c7c59f89d73cd41c9e2 map[firstTimestamp:2025-11-05T06:12:33Z lastTimestamp:2025-11-05T06:12:33Z reason:SetDesiredConfig]}"
I1105 06:12:56.937923 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:13:56Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:6 firstTimestamp:2025-11-05T05:20:27Z lastTimestamp:2025-11-05T06:13:56Z reason:SetDesiredConfig]}"
I1105 06:13:57.239104 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T06:14:22Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:4 firstTimestamp:2025-11-05T05:20:55Z lastTimestamp:2025-11-05T06:14:22Z reason:SetDesiredConfig]}"
time="2025-11-05T06:14:54Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:16a31e5783 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:4 firstTimestamp:2025-11-05T05:21:26Z lastTimestamp:2025-11-05T06:14:54Z reason:SetDesiredConfig]}"
I1105 06:14:57.519186 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
passed: (4m40s) 2025-11-05T06:15:46 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:75552-apply ssh keys when root owns .ssh"
started: 22/42/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should stamp coreos-bootimages configmap with current MCO hash and release version [apigroup:machineconfiguration.openshift.io]"
skip [github.com/openshift/machine-config-operator/test/extended/boot_image.go:40]: This test only applies to Azure platform
skipped: (6.7s) 2025-11-05T06:15:53 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] Should stamp coreos-bootimages configmap with current MCO hash and release version [apigroup:machineconfiguration.openshift.io]"
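The recurring client.go:1023 lines are the harness polling upgrade progress for the monitor; the command and flag are taken verbatim from the log, and the roughly one-minute cadence is inferred from the timestamps. An equivalent standalone loop would be:

    # poll detailed upgrade status once a minute, as the harness does:
    while true; do
      oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all
      sleep 60
    done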
create and remove MCN on node creation and deletion [apigroup:machineconfiguration.openshift.io] [Serial]" I1105 06:15:57.829056 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:16:58.089817 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:17:58.391083 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:5ef0e1389e node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:f7b9106824 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:4fa32495cc node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[daemonset:loki-promtail hmsg:a9098e9750 namespace:openshift-e2e-loki]}" message="{SuccessfulCreate Created pod: loki-promtail-4tfpm map[firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:SuccessfulCreate]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:5ef0e1389e node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:f7b9106824 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:d22e92958e namespace:openshift-e2e-loki pod:loki-promtail-4tfpm]}" message="{Scheduled Successfully assigned openshift-e2e-loki/loki-promtail-4tfpm to ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:Scheduled]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:4fa32495cc node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasSufficientPID Node 
ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:5ef0e1389e node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasSufficientMemory map[count:4 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:f7b9106824 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasNoDiskPressure map[count:4 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T06:18:54Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:4fa32495cc node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n status is now: NodeHasSufficientPID map[count:4 firstTimestamp:2025-11-05T06:18:54Z lastTimestamp:2025-11-05T06:18:54Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:NetworkNotReady]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f71c189d5f namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-bt5vl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f71c189d5f namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n 
pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-bt5vl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:56Z reason:FailedMount]}" time="2025-11-05T06:18:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:57Z reason:NetworkNotReady]}" time="2025-11-05T06:18:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:57Z reason:FailedMount]}" time="2025-11-05T06:18:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:57Z reason:FailedMount]}" time="2025-11-05T06:18:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:57Z reason:FailedMount]}" time="2025-11-05T06:18:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f71c189d5f namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-bt5vl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:57Z reason:FailedMount]}" I1105 06:18:58.671110 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:18:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:3 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:59Z reason:NetworkNotReady]}" time="2025-11-05T06:18:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:59Z reason:FailedMount]}" time="2025-11-05T06:18:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:59Z reason:FailedMount]}" time="2025-11-05T06:18:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:59Z reason:FailedMount]}" time="2025-11-05T06:18:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f71c189d5f namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-bt5vl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:18:59Z reason:FailedMount]}" time="2025-11-05T06:19:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:19:01Z reason:NetworkNotReady]}" time="2025-11-05T06:19:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:19:03Z reason:NetworkNotReady]}" time="2025-11-05T06:19:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:19:03Z reason:FailedMount]}" time="2025-11-05T06:19:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:19:03Z reason:FailedMount]}" time="2025-11-05T06:19:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:19:03Z reason:FailedMount]}" time="2025-11-05T06:19:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f71c189d5f namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:loki-promtail-4tfpm]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-bt5vl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T06:18:56Z lastTimestamp:2025-11-05T06:19:03Z reason:FailedMount]}" time="2025-11-05T06:19:41Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[certificatesigningrequest:csr-ct46r hmsg:eb7d83a467]}" message="{CSRApproved CSR \"csr-ct46r\" has been approved map[firstTimestamp:2025-11-05T06:19:41Z interesting:true lastTimestamp:2025-11-05T06:19:41Z reason:CSRApproved]}" time="2025-11-05T06:19:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:16ff93008b namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:gcp-pd-csi-driver-node-cxz7c]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.5:10303/healthz\": dial tcp 10.0.128.5:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:19:47Z lastTimestamp:2025-11-05T06:19:47Z reason:ProbeError]}" time="2025-11-05T06:19:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:8817a9d497 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n pod:gcp-pd-csi-driver-node-cxz7c]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.5:10303/healthz\": dial tcp 10.0.128.5:10303: connect: connection refused map[firstTimestamp:2025-11-05T06:19:47Z lastTimestamp:2025-11-05T06:19:47Z reason:Unhealthy]}" time="2025-11-05T06:19:48Z" level=info msg="event interval matches CertificateRotation" locator="{Kind map[certificatesigningrequest:csr-w7sgq hmsg:735efe2f98]}" 
message="{CSRApproved CSR \"csr-w7sgq\" has been approved map[firstTimestamp:2025-11-05T06:19:48Z interesting:true lastTimestamp:2025-11-05T06:19:48Z reason:CSRApproved]}" time="2025-11-05T06:19:49Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:aff7c416a6 namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (6 endpoints, 3 zones), addressType: IPv4 map[firstTimestamp:2025-11-05T06:19:49Z lastTimestamp:2025-11-05T06:19:49Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:19:49Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:aff7c416a6 namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (6 endpoints, 3 zones), addressType: IPv4 map[count:2 firstTimestamp:2025-11-05T06:19:49Z lastTimestamp:2025-11-05T06:19:49Z reason:TopologyAwareHintsDisabled]}" I1105 06:19:58.929771 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' passed: (4m42s) 2025-11-05T06:20:36 "[sig-mco][OCPFeatureGate:MachineConfigNodes] [Suite:openshift/machine-config-operator/disruptive][Disruptive][Slow]Should properly create and remove MCN on node creation and deletion [apigroup:machineconfiguration.openshift.io] [Serial]" started: 22/44/55 "[sig-etcd][Feature:DisasterRecovery][Suite:openshift/etcd/recovery][Timeout:2h] [Feature:EtcdRecovery][Disruptive] Recover with snapshot with two unhealthy nodes and lost quorum [Serial]" time="2025-11-05T06:20:37Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:98d0c59efc namespace:openshift-monitoring service:node-exporter]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-monitoring/node-exporter: skipping Pod node-exporter-96w9t for Service openshift-monitoring/node-exporter: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:37Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:37Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:f1791e69e9 namespace:openshift-machine-config-operator service:machine-config-daemon]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-machine-config-operator/machine-config-daemon: skipping Pod machine-config-daemon-4h999 for Service openshift-machine-config-operator/machine-config-daemon: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:37Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:38Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:f1791e69e9 namespace:openshift-machine-config-operator service:machine-config-daemon]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-machine-config-operator/machine-config-daemon: skipping Pod machine-config-daemon-4h999 for Service openshift-machine-config-operator/machine-config-daemon: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:2 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:38Z 
reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:38Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:98d0c59efc namespace:openshift-monitoring service:node-exporter]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-monitoring/node-exporter: skipping Pod node-exporter-96w9t for Service openshift-monitoring/node-exporter: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:2 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:38Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:40Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:f1791e69e9 namespace:openshift-machine-config-operator service:machine-config-daemon]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-machine-config-operator/machine-config-daemon: skipping Pod machine-config-daemon-4h999 for Service openshift-machine-config-operator/machine-config-daemon: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:3 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:40Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:40Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:98d0c59efc namespace:openshift-monitoring service:node-exporter]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-monitoring/node-exporter: skipping Pod node-exporter-96w9t for Service openshift-monitoring/node-exporter: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:3 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:40Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:44Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:f1791e69e9 namespace:openshift-machine-config-operator service:machine-config-daemon]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-machine-config-operator/machine-config-daemon: skipping Pod machine-config-daemon-4h999 for Service openshift-machine-config-operator/machine-config-daemon: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:4 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:44Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:44Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:98d0c59efc namespace:openshift-monitoring service:node-exporter]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-monitoring/node-exporter: skipping Pod node-exporter-96w9t for Service openshift-monitoring/node-exporter: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:4 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:44Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:52Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:98d0c59efc namespace:openshift-monitoring service:node-exporter]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-monitoring/node-exporter: skipping Pod node-exporter-96w9t for Service openshift-monitoring/node-exporter: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:5 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:52Z 
reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:20:52Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:f1791e69e9 namespace:openshift-machine-config-operator service:machine-config-daemon]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-machine-config-operator/machine-config-daemon: skipping Pod machine-config-daemon-4h999 for Service openshift-machine-config-operator/machine-config-daemon: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:5 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:20:52Z reason:FailedToUpdateEndpointSlices]}" I1105 06:20:59.172262 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:21:08Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:98d0c59efc namespace:openshift-monitoring service:node-exporter]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-monitoring/node-exporter: skipping Pod node-exporter-96w9t for Service openshift-monitoring/node-exporter: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:6 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:21:08Z reason:FailedToUpdateEndpointSlices]}" time="2025-11-05T06:21:08Z" level=info msg="event interval matches ErrorUpdatingEndpointSlices" locator="{Kind map[hmsg:f1791e69e9 namespace:openshift-machine-config-operator service:machine-config-daemon]}" message="{FailedToUpdateEndpointSlices Error updating Endpoint Slices for Service openshift-machine-config-operator/machine-config-daemon: skipping Pod machine-config-daemon-4h999 for Service openshift-machine-config-operator/machine-config-daemon: Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-hrm5n Not Found map[count:6 firstTimestamp:2025-11-05T06:20:37Z lastTimestamp:2025-11-05T06:21:08Z reason:FailedToUpdateEndpointSlices]}" I1105 06:21:59.436734 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:22:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:22:43Z lastTimestamp:2025-11-05T06:22:43Z reason:ProbeError]}" time="2025-11-05T06:22:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[firstTimestamp:2025-11-05T06:22:43Z lastTimestamp:2025-11-05T06:22:43Z reason:Unhealthy]}" time="2025-11-05T06:22:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: 
connection refused\nbody: \n map[count:142 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T06:22:46Z reason:ProbeError]}" time="2025-11-05T06:22:46Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:22:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:142 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T06:22:46Z reason:Unhealthy]}" time="2025-11-05T06:22:46Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:22:47Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:22:48Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T06:22:48Z reason:ProbeError]}" time="2025-11-05T06:22:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:9 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T06:22:48Z reason:Unhealthy]}" I1105 06:22:59.701884 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' E1105 06:23:48.620536 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" E1105 06:23:54.617148 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" E1105 06:23:55.870153 1669 
pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" I1105 06:23:59.887248 1669 client.go:1078] Error running oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all: StdOut> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version) StdErr> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version) I1105 06:23:59.887498 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' E1105 06:24:48.626160 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" E1105 06:24:54.621456 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" E1105 06:24:55.873491 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" I1105 06:25:00.116553 1669 client.go:1078] Error running oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all: StdOut> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version) StdErr> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterversions.config.openshift.io version) time="2025-11-05T06:25:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:42Z reason:ProbeError]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:5 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:42Z reason:Unhealthy]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0a25ac891c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Liveness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: 
connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:42Z lastTimestamp:2025-11-05T06:25:42Z reason:ProbeError]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a75ee11441 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Liveness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[firstTimestamp:2025-11-05T06:25:42Z lastTimestamp:2025-11-05T06:25:42Z reason:Unhealthy]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:23 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:43Z reason:ProbeError]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:23 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:43Z reason:Unhealthy]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:25:43Z reason:ProbeError]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:25:43Z reason:Unhealthy]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:45Z reason:ProbeError]}" time="2025-11-05T06:25:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:6 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:45Z reason:Unhealthy]}" time="2025-11-05T06:25:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0fe27f95b2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:25:16Z lastTimestamp:2025-11-05T06:25:46Z reason:ProbeError]}" time="2025-11-05T06:25:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:649a9ff6eb namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused map[count:4 firstTimestamp:2025-11-05T06:25:16Z lastTimestamp:2025-11-05T06:25:46Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:24 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:24 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bfb625e3fa namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Readiness probe error: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a239981af6 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Readiness probe failed: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused map[firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches 
ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:34055d48be namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:929b820f4f namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:fb4b81ceae namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:df653d738a namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:7 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n 
map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:11e3e3da27 namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:marketplace-operator-65754d8564-dptvk]}" message="{ProbeError Readiness probe error: Get \"http://10.131.2.32:8080/healthz\": dial tcp 10.131.2.32:8080: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T06:24:58Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:44ca77887c namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:marketplace-operator-65754d8564-dptvk]}" message="{Unhealthy Readiness probe failed: Get \"http://10.131.2.32:8080/healthz\": dial tcp 10.131.2.32:8080: connect: connection refused map[count:6 firstTimestamp:2025-11-05T06:24:58Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c34c36370 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:cadbff4a67 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:ProbeError]}" time="2025-11-05T06:25:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" 
message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:48Z reason:Unhealthy]}" E1105 06:25:48.630090 1669 pod_log_streamer.go:94] "Unhandled Error" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" time="2025-11-05T06:25:51Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a3b940479c namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Liveness probe error: Get \"http://10.129.0.14:8081/healthz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:51Z lastTimestamp:2025-11-05T06:25:51Z reason:ProbeError]}" time="2025-11-05T06:25:51Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:51Z reason:ProbeError]}" time="2025-11-05T06:25:51Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:20c0c4e5c5 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Liveness probe failed: Get \"http://10.129.0.14:8081/healthz\": dial tcp 10.129.0.14:8081: connect: connection refused map[firstTimestamp:2025-11-05T06:25:51Z lastTimestamp:2025-11-05T06:25:51Z reason:Unhealthy]}" time="2025-11-05T06:25:51Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:8 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:51Z reason:Unhealthy]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:8424594a2d namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{ProbeError Liveness probe error: Get \"http://10.130.2.48:8080/healthz\": dial tcp 10.130.2.48:8080: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:ProbeError]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ee5275810c namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{Unhealthy Liveness probe failed: Get \"http://10.130.2.48:8080/healthz\": dial tcp 10.130.2.48:8080: connect: connection refused 
map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:Unhealthy]}" time="2025-11-05T06:25:52Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0a25ac891c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Liveness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:42Z lastTimestamp:2025-11-05T06:25:52Z reason:ProbeError]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bf86b4c932 namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{ProbeError Readiness probe error: Get \"http://10.130.2.48:8080/healthz\": dial tcp 10.130.2.48:8080: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:ProbeError]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a75ee11441 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Liveness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:42Z lastTimestamp:2025-11-05T06:25:52Z reason:Unhealthy]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:edf74f435c namespace:openshift-operator-lifecycle-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:package-server-manager-6cfb5fcd44-s6665]}" message="{Unhealthy Readiness probe failed: Get \"http://10.130.2.48:8080/healthz\": dial tcp 10.130.2.48:8080: connect: connection refused map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:Unhealthy]}" time="2025-11-05T06:25:52Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:07f2997d83 namespace:openshift-operator-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:operator-controller-controller-manager-77d5cd444c-twc2v]}" message="{ProbeError Liveness probe error: Get \"http://10.130.2.29:8081/healthz\": dial tcp 10.130.2.29:8081: connect: connection refused\nbody: \n 
map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:ProbeError]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6d02202103 namespace:openshift-operator-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:operator-controller-controller-manager-77d5cd444c-twc2v]}" message="{Unhealthy Liveness probe failed: Get \"http://10.130.2.29:8081/healthz\": dial tcp 10.130.2.29:8081: connect: connection refused map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:Unhealthy]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:849b4fd7ba namespace:openshift-operator-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:operator-controller-controller-manager-77d5cd444c-twc2v]}" message="{ProbeError Readiness probe error: Get \"http://10.130.2.29:8081/readyz\": dial tcp 10.130.2.29:8081: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:ProbeError]}" time="2025-11-05T06:25:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:03ca724de8 namespace:openshift-operator-controller node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:operator-controller-controller-manager-77d5cd444c-twc2v]}" message="{Unhealthy Readiness probe failed: Get \"http://10.130.2.29:8081/readyz\": dial tcp 10.130.2.29:8081: connect: connection refused map[firstTimestamp:2025-11-05T06:25:52Z lastTimestamp:2025-11-05T06:25:52Z reason:Unhealthy]}" time="2025-11-05T06:25:53Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:25 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:53Z reason:ProbeError]}" time="2025-11-05T06:25:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:25 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:53Z reason:Unhealthy]}" time="2025-11-05T06:25:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:25:53Z reason:ProbeError]}" time="2025-11-05T06:25:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:25:53Z reason:Unhealthy]}" time="2025-11-05T06:25:53Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:54Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:54Z reason:ProbeError]}" time="2025-11-05T06:25:54Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:576a6317bf namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused map[count:9 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:54Z reason:Unhealthy]}" time="2025-11-05T06:25:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f1855f9cad namespace:openshift-route-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:route-controller-manager-595bb8d55f-7rjrv]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.24:8443/healthz\": context deadline exceeded map[firstTimestamp:2025-11-05T06:25:54Z lastTimestamp:2025-11-05T06:25:54Z reason:Unhealthy]}" time="2025-11-05T06:25:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:25:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:25:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd 
mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" E1105 06:25:55.406228 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{etcd-backup-ns/post-backup-deployment-7dd7659779-hmfgk &Pod{ObjectMeta:{post-backup-deployment-7dd7659779-hmfgk post-backup-deployment-7dd7659779- etcd-backup-ns 615b1405-d59b-495d-ac47-c65f267a08fe 95855 1 2025-11-05 06:22:40 +0000 UTC map[app:backup-deployment pod-template-hash:7dd7659779] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.128.2.20/23"],"mac_address":"0a:58:0a:80:02:14","gateway_ips":["10.128.2.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.128.2.1"},{"dest":"172.30.0.0/16","nextHop":"10.128.2.1"},{"dest":"169.254.0.5/32","nextHop":"10.128.2.1"},{"dest":"100.64.0.0/16","nextHop":"10.128.2.1"}],"ip_address":"10.128.2.20/23","gateway_ip":"10.128.2.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.2.20" ], "mac": "0a:58:0a:80:02:14", "default": true, "dns": {} }] openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default security.openshift.io/validated-scc-subject-type:user] [{apps/v1 ReplicaSet post-backup-deployment-7dd7659779 b22df020-bcc5-47c4-8666-2c92ad8fb900 0xc00e122527 0xc00e122528}] [] [{ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt Update v1 2025-11-05 06:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.ovn.org/pod-networks":{}}}} status} {kube-controller-manager Update v1 2025-11-05 06:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b22df020-bcc5-47c4-8666-2c92ad8fb900\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"post-backup-sleep-container\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus-daemon Update v1 2025-11-05 06:22:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 06:22:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.128.2.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6j5gh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:post-backup-sleep-container,Image:image-registry.openshift-image-registry.svc:5000/openshift/tools:latest,Command:[sleep 
infinity],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j5gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000940000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c31,c5,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000940000,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-2l7n9,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:42 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:40 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:42 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:42 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2025-11-05 06:22:40 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.128.4,PodIP:10.128.2.20,StartTime:2025-11-05 06:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:post-backup-sleep-container,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 06:22:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:image-registry.openshift-image-registry.svc:5000/openshift/tools:latest,ImageID:image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:c615b4eaf7d3a7d7517b7761f6a9a8f6f6b1db68f772580bcfe827060fd3b231,ContainerID:cri-o://0047319e1bf7a28d789691e01845c8c931e9d5264c46836f7c5f97f3b93c073b,Started:*true,AllocatedResources:ResourceList{},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kube-api-access-6j5gh,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:1000940000,GID:0,SupplementalGroups:[0 1000940000],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.2.20,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.128.4,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > E1105 06:25:55.406867 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{openshift-etcd/bumping-etcd-restore-pod &Pod{ObjectMeta:{bumping-etcd-restore-pod openshift-etcd 9602e9ed-bb76-44a9-9de0-8a4121782185 95853 1 2025-11-05 06:22:40 +0000 UTC map[] map[] [] [] [{bumping-etcd-restore-pod Apply v1 2025-11-05 06:22:40 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"cluster-restore\"}":{".":{},"f:command":{},"f:image":{},"f:name":{},"f:securityContext":{"f:privileged":{}},"f:volumeMounts":{"k:{\"mountPath\":\"/tmp/ssh\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:hostNetwork":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:tolerations":{},"f:volumes":{"k:{\"name\":\"keys\"}":{".":{},"f:name":{},"f:secret":{"f:secretName":{}}}}}} } {kubelet Update v1 2025-11-05 06:22:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodScheduled\"}":{"f:observedGeneration":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:observedGeneration":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:observedGeneration":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.0.0.7\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:keys,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:dr-ssh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},Volume{Name:kube-api-access-rctqf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:cluster-restore,Image:image-registry.openshift-image-registry.svc:5000/openshift/tools:latest,Command:[/bin/bash -c #!/bin/bash set -exuo pipefail # ssh key dance CORE_SSH_BASE_DIR=$HOME/.ssh SSH_MOUNT_DIR=/tmp/ssh P_KEY=$SSH_MOUNT_DIR/privKey # we can't change the permissions on the secret mount, thus we copy it to HOME mkdir -p $CORE_SSH_BASE_DIR && chmod 700 $CORE_SSH_BASE_DIR cp $P_KEY $CORE_SSH_BASE_DIR/id_rsa P_KEY=$CORE_SSH_BASE_DIR/id_rsa chmod 600 $P_KEY NODE_IPS=( 10.0.0.5 10.0.0.8 ) for i in "${NODE_IPS[@]}"; do echo "removing etcd static pod on [$i]" ssh -i $P_KEY -o StrictHostKeyChecking=no -q core@${i} sudo rm -rf /etc/kubernetes/manifests/etcd-pod.yaml echo "remove data dir on [$i]" ssh -i $P_KEY -o StrictHostKeyChecking=no -q core@${i} sudo rm -rf /var/lib/etcd done TARGET_NODE_NAME=10.0.0.7 ssh -i $P_KEY -o StrictHostKeyChecking=no -q core@${TARGET_NODE_NAME} < E1105 06:25:55.407752 1669 pod_ip_controller.go:75] "Unhandled Error" err=< invalid queue key '{etcd-backup-ns/post-backup-deployment-7dd7659779-f65hh &Pod{ObjectMeta:{post-backup-deployment-7dd7659779-f65hh post-backup-deployment-7dd7659779- etcd-backup-ns c4d8bda9-ca46-484b-acc5-98a1776f9141 95858 1 2025-11-05 06:22:40 +0000 UTC 
map[app:backup-deployment pod-template-hash:7dd7659779] map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.131.0.22/23"],"mac_address":"0a:58:0a:83:00:16","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0/14","nextHop":"10.131.0.1"},{"dest":"172.30.0.0/16","nextHop":"10.131.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.131.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.131.0.1"}],"ip_address":"10.131.0.22/23","gateway_ip":"10.131.0.1","role":"primary"}} k8s.v1.cni.cncf.io/network-status:[{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.22" ], "mac": "0a:58:0a:83:00:16", "default": true, "dns": {} }] openshift.io/scc:restricted-v2 seccomp.security.alpha.kubernetes.io/pod:runtime/default security.openshift.io/validated-scc-subject-type:user] [{apps/v1 ReplicaSet post-backup-deployment-7dd7659779 b22df020-bcc5-47c4-8666-2c92ad8fb900 0xc00e860167 0xc00e860168}] [] [{ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 Update v1 2025-11-05 06:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.ovn.org/pod-networks":{}}}} status} {kube-controller-manager Update v1 2025-11-05 06:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b22df020-bcc5-47c4-8666-2c92ad8fb900\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"post-backup-sleep-container\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus-daemon Update v1 2025-11-05 06:22:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{}}}} status} {kubelet Update v1 2025-11-05 06:22:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PodReadyToStartContainers\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:hostIPs":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.131.0.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pbrrq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:openshift-service-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:service-ca.crt,Path:service-ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,ClusterTrustBundle:nil,PodCertificate:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,Image:nil,},},},Containers:[]Container{Container{Name:post-backup-sleep-container,Image:image-registry.openshift-image-registry.svc:5000/openshift/tools:latest,Command:[sleep 
infinity],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbrrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000940000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,RestartPolicyRules:[]ContainerRestartRule{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c31,c5,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000940000,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,SupplementalGroupsPolicy:nil,SELinuxChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{LocalObjectReference{Name:default-dockercfg-2l7n9,},},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},Resources:nil,HostnameOverride:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:PodReadyToStartContainers,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:42 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:40 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:42 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2025-11-05 06:22:42 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2025-11-05 06:22:40 +0000 UTC,Reason:,Message:,ObservedGeneration:1,},},Message:,Reason:,HostIP:10.0.128.3,PodIP:10.131.0.22,StartTime:2025-11-05 06:22:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:post-backup-sleep-container,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2025-11-05 06:22:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:image-registry.openshift-image-registry.svc:5000/openshift/tools:latest,ImageID:image-registry.openshift-image-registry.svc:5000/openshift/tools@sha256:c615b4eaf7d3a7d7517b7761f6a9a8f6f6b1db68f772580bcfe827060fd3b231,ContainerID:cri-o://6f869f2e6e499d41de22c4b89619575b8d7c02856b5c06ff02a5ddc9d3329aef,Started:*true,AllocatedResources:ResourceList{},Resources:&ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMountStatus{VolumeMountStatus{Name:kube-api-access-pbrrq,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,ReadOnly:true,RecursiveReadOnly:*Disabled,},},User:&ContainerUser{Linux:&LinuxContainerUser{UID:1000940000,GID:0,SupplementalGroups:[0 1000940000],},},AllocatedResourcesStatus:[]ResourceStatus{},StopSignal:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.131.0.22,},},EphemeralContainerStatuses:[]ContainerStatus{},Resize:,ResourceClaimStatuses:[]PodResourceClaimStatus{},HostIPs:[]HostIP{HostIP{IP:10.0.128.3,},},ObservedGeneration:1,ExtendedResourceClaimStatus:nil,},}}': object has no meta: object does not implement the Object interfaces > time="2025-11-05T06:25:55Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:55Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:25:56Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0fe27f95b2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:25:16Z lastTimestamp:2025-11-05T06:25:56Z reason:ProbeError]}" time="2025-11-05T06:25:56Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:649a9ff6eb namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused 
map[count:5 firstTimestamp:2025-11-05T06:25:16Z lastTimestamp:2025-11-05T06:25:56Z reason:Unhealthy]}" time="2025-11-05T06:25:56Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:56Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:25:57Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:49078f4b39 namespace:openshift-config-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-config-operator-69bc6697c9-2bmrs]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.34:8443/healthz\": dial tcp 10.131.2.34:8443: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T06:25:30Z lastTimestamp:2025-11-05T06:25:57Z reason:ProbeError]}" time="2025-11-05T06:25:57Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:57Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:25:58Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:26 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:26 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bfb625e3fa 
namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Readiness probe error: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a239981af6 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Readiness probe failed: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:34055d48be namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fb4b81ceae namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:929b820f4f namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:df653d738a namespace:openshift-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:controller-manager-6848447799-p7xgz]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.14:8443/healthz\": dial tcp 10.131.2.14:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:247a206f9e namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z 
reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a6aa2ad388 namespace:openshift-authentication-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:authentication-operator-7898ff465d-29vtv]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.30:8443/healthz\": dial tcp 10.131.2.30:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:11e3e3da27 namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:marketplace-operator-65754d8564-dptvk]}" message="{ProbeError Readiness probe error: Get \"http://10.131.2.32:8080/healthz\": dial tcp 10.131.2.32:8080: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T06:24:58Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:44ca77887c namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:marketplace-operator-65754d8564-dptvk]}" message="{Unhealthy Readiness probe failed: Get \"http://10.131.2.32:8080/healthz\": dial tcp 10.131.2.32:8080: connect: connection refused map[count:7 firstTimestamp:2025-11-05T06:24:58Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c34c36370 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Liveness probe error: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:cadbff4a67 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Liveness probe failed: Get \"https://10.131.2.49:8443/healthz\": dial tcp 10.131.2.49:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2976e363a4 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.49:8443/readyz\": dial tcp 10.131.2.49:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:ProbeError]}" time="2025-11-05T06:25:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e6879c25f1 namespace:openshift-console-operator node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:console-operator-589679b99d-hksh7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.49:8443/readyz\": dial tcp 
10.131.2.49:8443: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:38Z lastTimestamp:2025-11-05T06:25:58Z reason:Unhealthy]}" time="2025-11-05T06:25:58Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:58Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:25:59Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:25:59Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" I1105 06:26:00.117909 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:26:00Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:00Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:01Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:01Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:02Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0a25ac891c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Liveness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:42Z lastTimestamp:2025-11-05T06:26:02Z reason:ProbeError]}" time="2025-11-05T06:26:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a75ee11441 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Liveness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:42Z lastTimestamp:2025-11-05T06:26:02Z reason:Unhealthy]}" time="2025-11-05T06:26:02Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:03Z reason:ProbeError]}" time="2025-11-05T06:26:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:03Z reason:Unhealthy]}" time="2025-11-05T06:26:03Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" 
time="2025-11-05T06:26:03Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:04Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:04Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:05Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:05Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:06Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:06Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:07Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:07Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" 
component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bfb625e3fa namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Readiness probe error: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:26:08Z reason:ProbeError]}" time="2025-11-05T06:26:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a239981af6 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Readiness probe failed: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:26:08Z reason:Unhealthy]}" time="2025-11-05T06:26:08Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:08Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:09Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:09Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:10Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:10Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log 
etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:11Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a3b940479c namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Liveness probe error: Get \"http://10.129.0.14:8081/healthz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:25:51Z lastTimestamp:2025-11-05T06:26:11Z reason:ProbeError]}" time="2025-11-05T06:26:11Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:20c0c4e5c5 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Liveness probe failed: Get \"http://10.129.0.14:8081/healthz\": dial tcp 10.129.0.14:8081: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:25:51Z lastTimestamp:2025-11-05T06:26:11Z reason:Unhealthy]}" time="2025-11-05T06:26:11Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:11Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:12Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:12Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:25:43Z 
lastTimestamp:2025-11-05T06:26:13Z reason:ProbeError]}" time="2025-11-05T06:26:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:4 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:13Z reason:Unhealthy]}" time="2025-11-05T06:26:13Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:13Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:14Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:14Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:15Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:15Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:16Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" 
time="2025-11-05T06:26:16Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:17Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:17Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:18Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bfb625e3fa namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Readiness probe error: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:26:18Z reason:ProbeError]}" time="2025-11-05T06:26:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:a239981af6 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Readiness probe failed: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused map[count:4 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:26:18Z reason:Unhealthy]}" time="2025-11-05T06:26:18Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:18Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:19Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd 
mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:19Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:20Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:20Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:21Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:21Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:22Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:22Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:23Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:23Z 
reason:ProbeError]}" time="2025-11-05T06:26:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:5 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:23Z reason:Unhealthy]}" time="2025-11-05T06:26:23Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:23Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:24Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:24Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:25Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:25Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:26Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:26Z" level=error msg="pod logged an 
error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:27Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:27Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:bfb625e3fa namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Readiness probe error: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:26:28Z reason:ProbeError]}" time="2025-11-05T06:26:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a239981af6 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Readiness probe failed: Get \"http://10.129.0.14:8081/readyz\": dial tcp 10.129.0.14:8081: connect: connection refused map[count:5 firstTimestamp:2025-11-05T06:25:48Z lastTimestamp:2025-11-05T06:26:28Z reason:Unhealthy]}" time="2025-11-05T06:26:28Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:28Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:29Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:29Z" 
level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:30Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:30Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:31Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a3b940479c namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{ProbeError Liveness probe error: Get \"http://10.129.0.14:8081/healthz\": dial tcp 10.129.0.14:8081: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:25:51Z lastTimestamp:2025-11-05T06:26:31Z reason:ProbeError]}" time="2025-11-05T06:26:31Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:20c0c4e5c5 namespace:openshift-catalogd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:catalogd-controller-manager-66bcb68989-t6zbh]}" message="{Unhealthy Liveness probe failed: Get \"http://10.129.0.14:8081/healthz\": dial tcp 10.129.0.14:8081: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:25:51Z lastTimestamp:2025-11-05T06:26:31Z reason:Unhealthy]}" time="2025-11-05T06:26:31Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:31Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:32Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" 
time="2025-11-05T06:26:32Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:33Z reason:ProbeError]}" time="2025-11-05T06:26:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:6 firstTimestamp:2025-11-05T06:25:43Z lastTimestamp:2025-11-05T06:26:33Z reason:Unhealthy]}" time="2025-11-05T06:26:33Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:33Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:34Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:34Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:35Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 
uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:35Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:36Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:36Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:37Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:37Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:38Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:38Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:39Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:39Z" level=error msg="pod logged an error: the server 
could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:40Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:40Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:41Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:41Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:42Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:42Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:43Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:43Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:44Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:44Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:45Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:45Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:46Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:46Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:47Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:47Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd 
mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:48Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:48Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:49Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:49Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:50Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:50Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:51Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:51Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:52Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log 
etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:52Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:53Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:53Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:55Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:55Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:56Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:56Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:57Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:57Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:58Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:58Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:26:59Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:26:59Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" I1105 06:27:00.285994 1669 client.go:1078] Error running oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all: StdOut> Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get machineconfigpools.machineconfiguration.openshift.io) StdErr> Error from server (Timeout): the server was unable to return a response in the time allotted, but may 
still be processing the request (get machineconfigpools.machineconfiguration.openshift.io) I1105 06:27:00.286215 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:27:00Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:00Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:01Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:01Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:02Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:02Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:03Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:03Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd 
mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:04Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:04Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:05Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:05Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:06Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:06Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:07Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:07Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:08Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log 
etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:08Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:09Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:09Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:10Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:10Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:11Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:11Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:12Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:12Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:13Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:13Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:14Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:14Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:15Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:15Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:16Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:16Z" 
level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:17Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:17Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:27:18Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/2aa4ad3c-6244-4641-8900-d08c35c8b674 container/etcd mirror-uid/8ad84aac27634b4c224de5f5fd4d9273" time="2025-11-05T06:27:18Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/bbf9c64b-7d65-4d98-bd2b-8155626542c6 container/etcd mirror-uid/e2c2a2e2331afdd13db01ce19924af02" time="2025-11-05T06:28:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:208 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T06:28:03Z reason:ProbeError]}" I1105 06:28:11.507440 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:28:52Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:42 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T06:28:52Z reason:ProbeError]}" time="2025-11-05T06:28:53Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" 
message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:84 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T06:28:53Z reason:ProbeError]}" time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:2 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:3 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:4 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:5 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:6 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" I1105 06:29:11.739578 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:29:11Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:7 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:11Z reason:ConfigMissing]}" time="2025-11-05T06:29:12Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two 
live etcd endpoints: [https://10.0.0.7:2379] map[count:8 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:12Z reason:ConfigMissing]}" time="2025-11-05T06:29:12Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:9 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:12Z reason:ConfigMissing]}" time="2025-11-05T06:29:14Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:10 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:14Z reason:ConfigMissing]}" time="2025-11-05T06:29:15Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:61 firstTimestamp:2025-11-05T04:19:24Z 
lastTimestamp:2025-11-05T06:29:15Z reason:ProbeError]}"
time="2025-11-05T06:29:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:140 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:15Z reason:Unhealthy]}"
time="2025-11-05T06:29:16Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:11 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:16Z reason:ConfigMissing]}"
time="2025-11-05T06:29:17Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f305fcc059 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:29:17Z lastTimestamp:2025-11-05T06:29:17Z reason:ProbeError]}"
time="2025-11-05T06:29:17Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1028212dbd namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[firstTimestamp:2025-11-05T06:29:17Z lastTimestamp:2025-11-05T06:29:17Z reason:Unhealthy]}"
time="2025-11-05T06:29:20Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [readyz output identical to the 06:29:15 entry above]\n\n map[count:62 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:20Z reason:ProbeError]}"
time="2025-11-05T06:29:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:141 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:20Z reason:Unhealthy]}"
time="2025-11-05T06:29:21Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:12 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:21Z reason:ConfigMissing]}"
time="2025-11-05T06:29:22Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:13 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:22Z reason:ConfigMissing]}"
time="2025-11-05T06:29:25Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [readyz output identical to the 06:29:15 entry above]\n\n map[count:63 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:25Z reason:ProbeError]}"
time="2025-11-05T06:29:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:142 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:25Z reason:Unhealthy]}"
time="2025-11-05T06:29:25Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [readyz output identical to the 06:29:15 entry above]\n\n map[count:64 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:25Z reason:ProbeError]}"
time="2025-11-05T06:29:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:143 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:25Z reason:Unhealthy]}"
time="2025-11-05T06:29:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f305fcc059 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:29:17Z lastTimestamp:2025-11-05T06:29:27Z reason:ProbeError]}"
time="2025-11-05T06:29:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1028212dbd namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:29:17Z lastTimestamp:2025-11-05T06:29:27Z reason:Unhealthy]}"
time="2025-11-05T06:29:30Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [readyz output identical to the 06:29:15 entry above]\n\n map[count:65 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:30Z reason:ProbeError]}"
time="2025-11-05T06:29:31Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-7c4bd569d6-2hmmb]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T06:29:32Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:14 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:29:32Z reason:ConfigMissing]}"
time="2025-11-05T06:29:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:29:32Z reason:Unhealthy]}"
time="2025-11-05T06:29:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:29:37Z reason:Unhealthy]}"
time="2025-11-05T06:29:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:29:42Z reason:Unhealthy]}"
time="2025-11-05T06:29:45Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [readyz output identical to the 06:29:15 entry above]\n\n map[count:68 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:29:45Z reason:ProbeError]}"
time="2025-11-05T06:29:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:29:47Z reason:Unhealthy]}"
time="2025-11-05T06:29:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:29:52Z reason:Unhealthy]}"
time="2025-11-05T06:29:53Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-84cdcc6795-zg568]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:29:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:29:57Z reason:Unhealthy]}" time="2025-11-05T06:29:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:29:57Z reason:Unhealthy]}" time="2025-11-05T06:30:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:30:02Z reason:Unhealthy]}" time="2025-11-05T06:30:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:02Z reason:Unhealthy]}" time="2025-11-05T06:30:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:30:07Z reason:Unhealthy]}" time="2025-11-05T06:30:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:07Z reason:Unhealthy]}" I1105 06:30:11.970786 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:30:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:30:12Z reason:Unhealthy]}" time="2025-11-05T06:30:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:12Z reason:Unhealthy]}" time="2025-11-05T06:30:13Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:15 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:30:12Z reason:ConfigMissing]}" time="2025-11-05T06:30:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:29:32Z lastTimestamp:2025-11-05T06:30:17Z reason:Unhealthy]}" time="2025-11-05T06:30:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:17Z reason:Unhealthy]}" time="2025-11-05T06:30:22Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:16 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:30:22Z reason:ConfigMissing]}" time="2025-11-05T06:30:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:22Z reason:ProbeError]}" time="2025-11-05T06:30:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:22Z reason:Unhealthy]}" time="2025-11-05T06:30:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b30be0e8b2 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.99:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.99:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:30:22Z 
lastTimestamp:2025-11-05T06:30:22Z reason:ProbeError]}" time="2025-11-05T06:30:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:c1fee1e1d2 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.99:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.99:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:22Z reason:Unhealthy]}" time="2025-11-05T06:30:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:22Z reason:Unhealthy]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:27Z reason:ProbeError]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:27Z reason:Unhealthy]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b30be0e8b2 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.99:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.99:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:27Z reason:ProbeError]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:c1fee1e1d2 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.99:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.99:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:27Z reason:Unhealthy]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" 
message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:27Z reason:ProbeError]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:27Z reason:Unhealthy]}" time="2025-11-05T06:30:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:27Z reason:Unhealthy]}" time="2025-11-05T06:30:28Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-7c4bd569d6-2hmmb]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:30:28Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-84cdcc6795-zg568]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:32Z reason:ProbeError]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:32Z reason:Unhealthy]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:32Z reason:ProbeError]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:4 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:32Z reason:Unhealthy]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:32Z reason:ProbeError]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:32Z reason:Unhealthy]}" time="2025-11-05T06:30:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" 
locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:32Z reason:Unhealthy]}" time="2025-11-05T06:30:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:30:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:37Z reason:ProbeError]}" time="2025-11-05T06:30:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:5 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:37Z reason:Unhealthy]}" time="2025-11-05T06:30:37Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:37Z reason:ProbeError]}" time="2025-11-05T06:30:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:37Z reason:Unhealthy]}" time="2025-11-05T06:30:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:37Z reason:ProbeError]}" 
time="2025-11-05T06:30:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:4 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:37Z reason:Unhealthy]}" time="2025-11-05T06:30:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:37Z reason:Unhealthy]}" time="2025-11-05T06:30:38Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:30:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:30:40Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:30:41Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:30:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:30:42Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:6 
firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:42Z reason:ProbeError]}" time="2025-11-05T06:30:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:6 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:42Z reason:Unhealthy]}" time="2025-11-05T06:30:42Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b30be0e8b2 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7cf6d99599-g2g5q]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.99:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.99:8443: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:42Z reason:ProbeError]}" time="2025-11-05T06:30:42Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:42Z reason:ProbeError]}" time="2025-11-05T06:30:42Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:5 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:42Z reason:Unhealthy]}" time="2025-11-05T06:30:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-k7pz2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:29:57Z lastTimestamp:2025-11-05T06:30:42Z reason:Unhealthy]}" time="2025-11-05T06:30:43Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:9099a66234 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2_openshift-etcd(1fe98e6d910bffc16bfc1517c2f4fe16) map[firstTimestamp:2025-11-05T06:30:43Z lastTimestamp:2025-11-05T06:30:43Z reason:BackOff]}" time="2025-11-05T06:30:44Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6b30a282ed namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Startup probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:30:44Z lastTimestamp:2025-11-05T06:30:44Z reason:ProbeError]}" time="2025-11-05T06:30:44Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:038a55ce52 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Startup probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[firstTimestamp:2025-11-05T06:30:44Z lastTimestamp:2025-11-05T06:30:44Z reason:Unhealthy]}" time="2025-11-05T06:30:44Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:9099a66234 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2_openshift-etcd(1fe98e6d910bffc16bfc1517c2f4fe16) map[count:2 firstTimestamp:2025-11-05T06:30:43Z lastTimestamp:2025-11-05T06:30:44Z reason:BackOff]}" time="2025-11-05T06:30:45Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:9099a66234 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2_openshift-etcd(1fe98e6d910bffc16bfc1517c2f4fe16) map[count:3 firstTimestamp:2025-11-05T06:30:43Z lastTimestamp:2025-11-05T06:30:45Z reason:BackOff]}" time="2025-11-05T06:30:47Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:9099a66234 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2_openshift-etcd(1fe98e6d910bffc16bfc1517c2f4fe16) map[count:4 firstTimestamp:2025-11-05T06:30:43Z lastTimestamp:2025-11-05T06:30:47Z reason:BackOff]}" time="2025-11-05T06:30:47Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:47Z reason:ProbeError]}" time="2025-11-05T06:30:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:7 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:47Z reason:Unhealthy]}" time="2025-11-05T06:30:47Z" level=info msg="event 
interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:47Z reason:ProbeError]}" time="2025-11-05T06:30:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:6 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:47Z reason:Unhealthy]}" time="2025-11-05T06:30:48Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:9099a66234 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2_openshift-etcd(1fe98e6d910bffc16bfc1517c2f4fe16) map[count:5 firstTimestamp:2025-11-05T06:30:43Z lastTimestamp:2025-11-05T06:30:48Z reason:BackOff]}" time="2025-11-05T06:30:51Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-84cdcc6795-svvwb]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:30:52Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:52Z reason:ProbeError]}" time="2025-11-05T06:30:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:8 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T06:30:52Z reason:Unhealthy]}" time="2025-11-05T06:30:52Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:52Z reason:ProbeError]}" time="2025-11-05T06:30:52Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:7 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:52Z reason:Unhealthy]}" time="2025-11-05T06:30:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:30:53Z reason:Unhealthy]}" time="2025-11-05T06:30:57Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:57Z reason:ProbeError]}" time="2025-11-05T06:30:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:8 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:30:57Z reason:Unhealthy]}" time="2025-11-05T06:30:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:30:58Z reason:Unhealthy]}" time="2025-11-05T06:31:02Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:02Z reason:ProbeError]}" time="2025-11-05T06:31:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:9 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:02Z reason:Unhealthy]}" time="2025-11-05T06:31:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:03Z reason:Unhealthy]}" time="2025-11-05T06:31:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:07Z reason:ProbeError]}" time="2025-11-05T06:31:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:10 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:07Z reason:Unhealthy]}" time="2025-11-05T06:31:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 
namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:08Z reason:Unhealthy]}" I1105 06:31:12.237995 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:31:12Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:12Z reason:ProbeError]}" time="2025-11-05T06:31:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:11 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:12Z reason:Unhealthy]}" time="2025-11-05T06:31:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:13Z reason:Unhealthy]}" time="2025-11-05T06:31:17Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:12 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:17Z reason:ProbeError]}" time="2025-11-05T06:31:17Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:12 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:17Z reason:Unhealthy]}" time="2025-11-05T06:31:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:18Z reason:Unhealthy]}" time="2025-11-05T06:31:21Z" level=info msg="event 
interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:17 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:31:21Z reason:ConfigMissing]}" time="2025-11-05T06:31:22Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:18 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:31:22Z reason:ConfigMissing]}" time="2025-11-05T06:31:22Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:13 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T06:31:22Z reason:ProbeError]}" time="2025-11-05T06:31:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:23Z reason:Unhealthy]}" time="2025-11-05T06:31:25Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:19 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:31:25Z reason:ConfigMissing]}" time="2025-11-05T06:31:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:28Z reason:Unhealthy]}" time="2025-11-05T06:31:32Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-7c4bd569d6-4ljhk]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:31:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:33Z reason:Unhealthy]}" time="2025-11-05T06:31:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:31:33Z reason:Unhealthy]}" time="2025-11-05T06:31:34Z" level=info msg="event interval matches EtcdEndpointsConfigMissingDuringTwoNodeTests" locator="{ map[hmsg:e0fb513ba6 namespace:openshift-kube-apiserver-operator]}" message="{ConfigMissing apiServerArguments.etcd-servers has less than two live etcd endpoints: [https://10.0.0.7:2379] map[count:20 firstTimestamp:2025-11-05T06:29:11Z lastTimestamp:2025-11-05T06:31:34Z reason:ConfigMissing]}" time="2025-11-05T06:31:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:30:53Z lastTimestamp:2025-11-05T06:31:38Z reason:Unhealthy]}" time="2025-11-05T06:31:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:31:38Z reason:Unhealthy]}" time="2025-11-05T06:31:38Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-st5gm]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:31:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T06:31:43Z reason:ProbeError]}" time="2025-11-05T06:31:43Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:0bc456ac9e namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.133:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.133:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:31:43Z lastTimestamp:2025-11-05T06:31:43Z reason:ProbeError]}" time="2025-11-05T06:31:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ab2db9859b namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-f8pn7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.133:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.133:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:31:43Z lastTimestamp:2025-11-05T06:31:43Z reason:Unhealthy]}" time="2025-11-05T06:31:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:5 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T06:31:43Z reason:Unhealthy]}" time="2025-11-05T06:31:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:31:43Z reason:Unhealthy]}" time="2025-11-05T06:31:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T06:31:48Z reason:ProbeError]}" time="2025-11-05T06:31:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T06:31:48Z reason:Unhealthy]}" time="2025-11-05T06:31:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:31:48Z reason:Unhealthy]}" time="2025-11-05T06:31:50Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-w2ss9]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:31:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:31:51Z reason:Unhealthy]}" time="2025-11-05T06:31:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:31:53Z reason:Unhealthy]}" time="2025-11-05T06:31:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:31:56Z reason:Unhealthy]}" time="2025-11-05T06:31:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:31:58Z reason:Unhealthy]}" time="2025-11-05T06:32:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:01Z reason:Unhealthy]}" 
time="2025-11-05T06:32:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:32:03Z reason:Unhealthy]}" time="2025-11-05T06:32:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:06Z reason:Unhealthy]}" time="2025-11-05T06:32:07Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:07Z reason:ProbeError]}" time="2025-11-05T06:32:07Z" level=info msg="event 
interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:07Z reason:Unhealthy]}" time="2025-11-05T06:32:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:32:08Z reason:Unhealthy]}" time="2025-11-05T06:32:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:11Z reason:Unhealthy]}" I1105 06:32:12.462187 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:32:12Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller 
ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:2 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:12Z reason:ProbeError]}" time="2025-11-05T06:32:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:12Z reason:Unhealthy]}" time="2025-11-05T06:32:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:32:13Z reason:Unhealthy]}" time="2025-11-05T06:32:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:16Z reason:Unhealthy]}" time="2025-11-05T06:32:17Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles 
ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:3 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:17Z reason:ProbeError]}" time="2025-11-05T06:32:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:17Z reason:Unhealthy]}" time="2025-11-05T06:32:17Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller 
ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:4 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:17Z reason:ProbeError]}" time="2025-11-05T06:32:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:17Z reason:Unhealthy]}" time="2025-11-05T06:32:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:31:33Z lastTimestamp:2025-11-05T06:32:18Z reason:Unhealthy]}" time="2025-11-05T06:32:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:21Z reason:Unhealthy]}" time="2025-11-05T06:32:22Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers 
ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:5 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:22Z reason:ProbeError]}" time="2025-11-05T06:32:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:22Z reason:Unhealthy]}" time="2025-11-05T06:32:23Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:8edfe3aecc namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.86:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.86:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:32:23Z lastTimestamp:2025-11-05T06:32:23Z reason:ProbeError]}" time="2025-11-05T06:32:23Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:3cb787870e namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.86:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.86:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:32:23Z lastTimestamp:2025-11-05T06:32:23Z reason:Unhealthy]}" time="2025-11-05T06:32:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:26Z reason:Unhealthy]}" time="2025-11-05T06:32:27Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync 
ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:6 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:27Z reason:ProbeError]}" time="2025-11-05T06:32:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:27Z reason:Unhealthy]}" time="2025-11-05T06:32:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:8edfe3aecc namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.86:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.86:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:32:23Z lastTimestamp:2025-11-05T06:32:28Z reason:ProbeError]}" time="2025-11-05T06:32:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:3cb787870e namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.86:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 
10.131.2.86:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:32:23Z lastTimestamp:2025-11-05T06:32:28Z reason:Unhealthy]}" time="2025-11-05T06:32:31Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5c6656b6fd-r28cv]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:32:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:31Z reason:Unhealthy]}" time="2025-11-05T06:32:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion 
ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:7 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:32Z reason:ProbeError]}" time="2025-11-05T06:32:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:32Z reason:Unhealthy]}" time="2025-11-05T06:32:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:8edfe3aecc namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-7cf6d99599-zs6lh]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.86:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.86:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:32:23Z lastTimestamp:2025-11-05T06:32:33Z reason:ProbeError]}" time="2025-11-05T06:32:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:31:51Z lastTimestamp:2025-11-05T06:32:36Z reason:Unhealthy]}" time="2025-11-05T06:32:37Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer 
ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:8 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:37Z reason:ProbeError]}" time="2025-11-05T06:32:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:37Z reason:Unhealthy]}" time="2025-11-05T06:32:41Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:c2d6ad3132 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.87:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.87:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:32:41Z lastTimestamp:2025-11-05T06:32:41Z reason:ProbeError]}" time="2025-11-05T06:32:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6fa5f306e8 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-nv95f]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.87:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.87:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:32:41Z lastTimestamp:2025-11-05T06:32:41Z reason:Unhealthy]}" time="2025-11-05T06:32:42Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers 
ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:9 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:42Z reason:ProbeError]}" time="2025-11-05T06:32:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:42Z reason:Unhealthy]}" time="2025-11-05T06:32:47Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller 
ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:10 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:47Z reason:ProbeError]}" time="2025-11-05T06:32:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:47Z reason:Unhealthy]}" time="2025-11-05T06:32:49Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-8645679b75-c7mph]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:32:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:32:51Z reason:Unhealthy]}" time="2025-11-05T06:32:52Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:11 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:52Z reason:ProbeError]}" time="2025-11-05T06:32:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 
map[count:11 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:52Z reason:Unhealthy]}" time="2025-11-05T06:32:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:32:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:32:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:32:56Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:32:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:32:56Z reason:Unhealthy]}" time="2025-11-05T06:32:57Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:32:57Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers 
ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:12 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:57Z reason:ProbeError]}" time="2025-11-05T06:32:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:12 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:32:57Z reason:Unhealthy]}" time="2025-11-05T06:32:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:32:58Z reason:ProbeError]}" time="2025-11-05T06:32:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:6 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:32:58Z reason:Unhealthy]}" time="2025-11-05T06:32:59Z" level=info msg="event interval matches 
KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:d26fe52dfe namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(bffa02fecf39ef8047c86605497d4590) map[firstTimestamp:2025-11-05T06:32:59Z lastTimestamp:2025-11-05T06:32:59Z reason:BackOff]}" time="2025-11-05T06:33:00Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:d26fe52dfe namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(bffa02fecf39ef8047c86605497d4590) map[count:2 firstTimestamp:2025-11-05T06:32:59Z lastTimestamp:2025-11-05T06:33:00Z reason:BackOff]}" time="2025-11-05T06:33:01Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:d26fe52dfe namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(bffa02fecf39ef8047c86605497d4590) map[count:3 firstTimestamp:2025-11-05T06:32:59Z lastTimestamp:2025-11-05T06:33:01Z reason:BackOff]}" time="2025-11-05T06:33:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:01Z reason:Unhealthy]}" time="2025-11-05T06:33:02Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:e9f2182537 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller 
ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:13 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T06:33:02Z reason:ProbeError]}" time="2025-11-05T06:33:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:33:03Z reason:ProbeError]}" time="2025-11-05T06:33:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:7 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:33:03Z reason:Unhealthy]}" time="2025-11-05T06:33:03Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:d26fe52dfe namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(bffa02fecf39ef8047c86605497d4590) map[count:4 firstTimestamp:2025-11-05T06:32:59Z lastTimestamp:2025-11-05T06:33:03Z reason:BackOff]}" time="2025-11-05T06:33:04Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:d26fe52dfe namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1_openshift-etcd(bffa02fecf39ef8047c86605497d4590) map[count:5 firstTimestamp:2025-11-05T06:32:59Z lastTimestamp:2025-11-05T06:33:04Z reason:BackOff]}" time="2025-11-05T06:33:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy 
Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:06Z reason:Unhealthy]}" time="2025-11-05T06:33:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:33:08Z reason:ProbeError]}" time="2025-11-05T06:33:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:8 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:33:08Z reason:Unhealthy]}" time="2025-11-05T06:33:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:33:08Z reason:ProbeError]}" time="2025-11-05T06:33:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:9 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T06:33:08Z reason:Unhealthy]}" time="2025-11-05T06:33:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:11Z reason:Unhealthy]}" I1105 06:33:13.772253 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:33:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:16Z reason:Unhealthy]}" time="2025-11-05T06:33:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind 
map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:21Z reason:Unhealthy]}" time="2025-11-05T06:33:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:26Z reason:Unhealthy]}" time="2025-11-05T06:33:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:31Z reason:Unhealthy]}" time="2025-11-05T06:33:33Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5c6656b6fd-4j82d]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:33:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:33:34Z reason:Unhealthy]}" time="2025-11-05T06:33:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-84cdcc6795-zg568]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:32:51Z lastTimestamp:2025-11-05T06:33:36Z reason:Unhealthy]}" time="2025-11-05T06:33:39Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:33:39Z reason:Unhealthy]}" time="2025-11-05T06:33:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:33:44Z reason:Unhealthy]}" time="2025-11-05T06:33:49Z" level=info 
msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:33:49Z reason:Unhealthy]}" time="2025-11-05T06:33:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:33:54Z reason:Unhealthy]}" time="2025-11-05T06:33:55Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-mkwvr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:33:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:33:56Z reason:Unhealthy]}" time="2025-11-05T06:33:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:33:59Z reason:Unhealthy]}" time="2025-11-05T06:34:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:01Z reason:Unhealthy]}" time="2025-11-05T06:34:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:34:04Z reason:Unhealthy]}" time="2025-11-05T06:34:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:33:56Z 
lastTimestamp:2025-11-05T06:34:06Z reason:Unhealthy]}" time="2025-11-05T06:34:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:34:09Z reason:Unhealthy]}" time="2025-11-05T06:34:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:11Z reason:Unhealthy]}" I1105 06:34:14.023659 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:34:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:34:14Z reason:Unhealthy]}" time="2025-11-05T06:34:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:16Z reason:Unhealthy]}" time="2025-11-05T06:34:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:33:34Z lastTimestamp:2025-11-05T06:34:19Z reason:Unhealthy]}" time="2025-11-05T06:34:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:21Z reason:Unhealthy]}" time="2025-11-05T06:34:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2b6d04ae19 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.137:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.137:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:34:24Z lastTimestamp:2025-11-05T06:34:24Z reason:ProbeError]}" time="2025-11-05T06:34:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:317c278b7e namespace:openshift-apiserver 
node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.137:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.137:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:34:24Z lastTimestamp:2025-11-05T06:34:24Z reason:Unhealthy]}" time="2025-11-05T06:34:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:26Z reason:Unhealthy]}" time="2025-11-05T06:34:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2b6d04ae19 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.137:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.137:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:34:24Z lastTimestamp:2025-11-05T06:34:29Z reason:ProbeError]}" time="2025-11-05T06:34:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:317c278b7e namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.137:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.137:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:34:24Z lastTimestamp:2025-11-05T06:34:29Z reason:Unhealthy]}" time="2025-11-05T06:34:31Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5b9bb8d494-j6jcn]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:34:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:31Z reason:Unhealthy]}" time="2025-11-05T06:34:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2b6d04ae19 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-7cf6d99599-8jf7f]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.137:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.137:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:34:24Z lastTimestamp:2025-11-05T06:34:34Z reason:ProbeError]}" time="2025-11-05T06:34:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:36Z reason:Unhealthy]}" time="2025-11-05T06:34:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:33:56Z lastTimestamp:2025-11-05T06:34:41Z reason:Unhealthy]}" time="2025-11-05T06:34:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:55cd8a843d namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.132:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.132:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:34:46Z lastTimestamp:2025-11-05T06:34:46Z reason:ProbeError]}" time="2025-11-05T06:34:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f7e5e4058a namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-8645679b75-c7mph]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.132:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.132:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:34:46Z lastTimestamp:2025-11-05T06:34:46Z reason:Unhealthy]}" time="2025-11-05T06:34:52Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-btw9k]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:34:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:34:53Z reason:Unhealthy]}" time="2025-11-05T06:34:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:34:58Z reason:Unhealthy]}" time="2025-11-05T06:35:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:03Z reason:Unhealthy]}" time="2025-11-05T06:35:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:08Z reason:Unhealthy]}" time="2025-11-05T06:35:12Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:12Z reason:ProbeError]}" time="2025-11-05T06:35:12Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:12Z reason:Unhealthy]}" time="2025-11-05T06:35:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:13Z reason:Unhealthy]}" I1105 06:35:14.279727 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:35:17Z" level=info msg="event interval matches 
EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:17Z reason:ProbeError]}" time="2025-11-05T06:35:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:17Z reason:Unhealthy]}" time="2025-11-05T06:35:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:18Z reason:Unhealthy]}" time="2025-11-05T06:35:22Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:22Z reason:ProbeError]}" time="2025-11-05T06:35:22Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:3 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:22Z reason:Unhealthy]}" time="2025-11-05T06:35:22Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:22Z reason:ProbeError]}" time="2025-11-05T06:35:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:4 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:22Z reason:Unhealthy]}" time="2025-11-05T06:35:23Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind 
map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:16 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:23Z reason:ProbeError]}" time="2025-11-05T06:35:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:16 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:23Z reason:Unhealthy]}" time="2025-11-05T06:35:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:23Z reason:Unhealthy]}" time="2025-11-05T06:35:27Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind 
map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:27Z reason:ProbeError]}" time="2025-11-05T06:35:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:5 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:27Z reason:Unhealthy]}" time="2025-11-05T06:35:28Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:17 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:28Z reason:ProbeError]}" 
time="2025-11-05T06:35:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:17 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:28Z reason:Unhealthy]}" time="2025-11-05T06:35:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:28Z reason:Unhealthy]}" time="2025-11-05T06:35:32Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:6 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:32Z reason:ProbeError]}" time="2025-11-05T06:35:32Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:6 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:32Z reason:Unhealthy]}" time="2025-11-05T06:35:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers 
ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:18 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:33Z reason:ProbeError]}" time="2025-11-05T06:35:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:18 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:33Z reason:Unhealthy]}" time="2025-11-05T06:35:33Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers 
ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:19 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:33Z reason:ProbeError]}" time="2025-11-05T06:35:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:19 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:33Z reason:Unhealthy]}" time="2025-11-05T06:35:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:33Z reason:Unhealthy]}" time="2025-11-05T06:35:34Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5b9bb8d494-fl2hr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:35:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:35:37Z reason:Unhealthy]}" time="2025-11-05T06:35:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:37Z reason:ProbeError]}" time="2025-11-05T06:35:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:7 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:37Z reason:Unhealthy]}" time="2025-11-05T06:35:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert 
ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:20 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T06:35:38Z reason:ProbeError]}" time="2025-11-05T06:35:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:34:53Z lastTimestamp:2025-11-05T06:35:38Z reason:Unhealthy]}" time="2025-11-05T06:35:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:40Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:41Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:35:42Z reason:Unhealthy]}" time="2025-11-05T06:35:42Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError 
Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:42Z reason:ProbeError]}" time="2025-11-05T06:35:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:8 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:42Z reason:Unhealthy]}" time="2025-11-05T06:35:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:43Z" level=info msg="event interval matches ProbeErrorConnectionRefused" locator="{Kind map[hmsg:2081bddc77 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.105:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:35:43Z lastTimestamp:2025-11-05T06:35:43Z reason:ProbeError]}" time="2025-11-05T06:35:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:2bd18508e1 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-8645679b75-w2ss9]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.105:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.105:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:35:43Z lastTimestamp:2025-11-05T06:35:43Z reason:Unhealthy]}" time="2025-11-05T06:35:44Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:45Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:46Z" level=error msg="pod logged an error: container \"etcd\" in 
pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:35:47Z reason:Unhealthy]}" time="2025-11-05T06:35:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:47Z reason:ProbeError]}" time="2025-11-05T06:35:47Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:9 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:47Z reason:Unhealthy]}" time="2025-11-05T06:35:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/0a413250-28ef-4107-875d-ab63b87dcdeb container/etcd mirror-uid/110aafcbe9a977fbcf2fef5708823de4" time="2025-11-05T06:35:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:35:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:35:50Z reason:Unhealthy]}" time="2025-11-05T06:35:50Z" level=info 
msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-whkvl]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:35:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:35:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:35:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:35:52Z reason:Unhealthy]}" time="2025-11-05T06:35:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:35:52Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:52Z reason:ProbeError]}" time="2025-11-05T06:35:52Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:10 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:35:52Z reason:Unhealthy]}" time="2025-11-05T06:35:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer 
locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:35:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:35:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:35:55Z reason:Unhealthy]}" time="2025-11-05T06:35:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:35:57Z reason:Unhealthy]}" time="2025-11-05T06:36:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:00Z reason:Unhealthy]}" time="2025-11-05T06:36:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:36:02Z reason:Unhealthy]}" time="2025-11-05T06:36:02Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:6794c43155 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded\nbody: \n map[firstTimestamp:2025-11-05T06:36:02Z lastTimestamp:2025-11-05T06:36:02Z reason:ProbeError]}" time="2025-11-05T06:36:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:86048ad0e7 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded map[firstTimestamp:2025-11-05T06:36:02Z lastTimestamp:2025-11-05T06:36:02Z reason:Unhealthy]}" time="2025-11-05T06:36:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 
namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:05Z reason:Unhealthy]}" time="2025-11-05T06:36:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:36:07Z reason:Unhealthy]}" time="2025-11-05T06:36:07Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:74091a054f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T06:36:07Z lastTimestamp:2025-11-05T06:36:07Z reason:ProbeError]}" time="2025-11-05T06:36:07Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:1ac69da92c namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T06:36:07Z lastTimestamp:2025-11-05T06:36:07Z reason:Unhealthy]}" time="2025-11-05T06:36:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:10Z reason:Unhealthy]}" time="2025-11-05T06:36:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:36:12Z reason:Unhealthy]}" time="2025-11-05T06:36:12Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T06:36:12Z lastTimestamp:2025-11-05T06:36:12Z reason:ProbeError]}" time="2025-11-05T06:36:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with 
statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:15Z reason:Unhealthy]}" I1105 06:36:15.614412 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:36:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:36:17Z reason:Unhealthy]}" time="2025-11-05T06:36:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:20Z reason:Unhealthy]}" time="2025-11-05T06:36:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:35:37Z lastTimestamp:2025-11-05T06:36:22Z reason:Unhealthy]}" time="2025-11-05T06:36:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:25Z reason:Unhealthy]}" time="2025-11-05T06:36:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68066e3f1d namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.126:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.126:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:36:27Z lastTimestamp:2025-11-05T06:36:27Z reason:ProbeError]}" time="2025-11-05T06:36:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:c88633ee7b namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.126:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.126:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:36:27Z lastTimestamp:2025-11-05T06:36:27Z reason:Unhealthy]}" time="2025-11-05T06:36:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68066e3f1d namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.126:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.126:8443: connect: connection refused\nbody: \n map[count:2 
firstTimestamp:2025-11-05T06:36:27Z lastTimestamp:2025-11-05T06:36:32Z reason:ProbeError]}" time="2025-11-05T06:36:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c88633ee7b namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-7c4bd569d6-2hmmb]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.126:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.126:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:36:27Z lastTimestamp:2025-11-05T06:36:32Z reason:Unhealthy]}" time="2025-11-05T06:36:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:30Z reason:Unhealthy]}" time="2025-11-05T06:36:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:35Z reason:Unhealthy]}" time="2025-11-05T06:36:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-8645679b75-st5gm]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:11 firstTimestamp:2025-11-05T06:35:50Z lastTimestamp:2025-11-05T06:36:40Z reason:Unhealthy]}" I1105 06:37:15.899107 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:37:28Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:241 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T06:37:28Z reason:ProbeError]}" time="2025-11-05T06:37:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:37:28Z lastTimestamp:2025-11-05T06:37:28Z reason:ProbeError]}" time="2025-11-05T06:37:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[firstTimestamp:2025-11-05T06:37:28Z lastTimestamp:2025-11-05T06:37:28Z 
reason:Unhealthy]}" time="2025-11-05T06:37:35Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5b9bb8d494-sp494]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:37:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:37:38Z reason:Unhealthy]}" time="2025-11-05T06:37:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:37:43Z reason:Unhealthy]}" time="2025-11-05T06:37:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:37:48Z reason:Unhealthy]}" time="2025-11-05T06:37:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:37:53Z reason:Unhealthy]}" time="2025-11-05T06:37:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:37:56Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:37:57Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:37:58Z" 
level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:37:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:37:58Z reason:Unhealthy]}" time="2025-11-05T06:37:59Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:38:00Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:38:01Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:38:02Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/1a97740c-5b19-4684-89d5-fd2cc2cfb98e container/etcd mirror-uid/1fe98e6d910bffc16bfc1517c2f4fe16" time="2025-11-05T06:38:02Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T06:38:03Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:38:03Z reason:Unhealthy]}" time="2025-11-05T06:38:03Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError 
Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:249 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T06:38:03Z reason:ProbeError]}" time="2025-11-05T06:38:03Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T06:38:04Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T06:38:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T06:38:06Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T06:38:07Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T06:38:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:38:08Z reason:Unhealthy]}" time="2025-11-05T06:38:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:38:13Z reason:Unhealthy]}" I1105 06:38:16.150648 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:38:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:38:18Z reason:Unhealthy]}" time="2025-11-05T06:38:23Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:37:38Z lastTimestamp:2025-11-05T06:38:23Z reason:Unhealthy]}" time="2025-11-05T06:38:25Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:76 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T06:38:25Z reason:ProbeError]}" time="2025-11-05T06:38:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e0de7bc184 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" 
message="{ProbeError Readiness probe error: Get \"https://10.131.2.106:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.106:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:38:28Z lastTimestamp:2025-11-05T06:38:28Z reason:ProbeError]}" time="2025-11-05T06:38:28Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:9be3a4d82f namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.106:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.106:8443: connect: connection refused map[firstTimestamp:2025-11-05T06:38:28Z lastTimestamp:2025-11-05T06:38:28Z reason:Unhealthy]}" time="2025-11-05T06:38:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e0de7bc184 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.106:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.106:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T06:38:28Z lastTimestamp:2025-11-05T06:38:33Z reason:ProbeError]}" time="2025-11-05T06:38:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:9be3a4d82f namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5c6656b6fd-r28cv]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.106:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.106:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T06:38:28Z lastTimestamp:2025-11-05T06:38:33Z reason:Unhealthy]}" I1105 06:39:16.405167 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:39:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:82fc931fbf namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 403\nbody: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/readyz\\\"\",\"reason\":\"Forbidden\",\"details\":{},\"code\":403}\n\n map[count:5 firstTimestamp:2025-11-05T04:20:45Z lastTimestamp:2025-11-05T06:39:48Z reason:ProbeError]}" time="2025-11-05T06:39:50Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:39:50Z lastTimestamp:2025-11-05T06:39:50Z reason:ProbeError]}" time="2025-11-05T06:39:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe 
failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[firstTimestamp:2025-11-05T06:39:50Z lastTimestamp:2025-11-05T06:39:50Z reason:Unhealthy]}" time="2025-11-05T06:39:54Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:134 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T06:39:54Z reason:ProbeError]}" time="2025-11-05T06:39:54Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:134 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T06:39:54Z reason:Unhealthy]}" I1105 06:40:16.691785 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:40:17.065274 1669 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go@v0.34.1/tools/cache/reflector.go:290" type="*v1.Event" err="Internal error occurred: etcdserver: no leader" time="2025-11-05T06:40:19Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:20Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/78bb197e-9f46-4055-bdfc-143cc5e2e8c3 container/etcd mirror-uid/bffa02fecf39ef8047c86605497d4590" time="2025-11-05T06:40:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T06:40:31Z" level=error 
msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T06:40:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T06:40:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T06:40:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" I1105 06:41:16.975677 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:41:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes 
ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[firstTimestamp:2025-11-05T06:41:32Z lastTimestamp:2025-11-05T06:41:32Z reason:ProbeError]}" time="2025-11-05T06:42:07Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:9 firstTimestamp:2025-11-05T06:41:32Z lastTimestamp:2025-11-05T06:42:07Z reason:ProbeError]}" I1105 
06:42:19.341140 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:43:19.613899 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:44:19.894151 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:45:20.146001 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' passed: (25m13s) 2025-11-05T06:45:51 "[sig-etcd][Feature:DisasterRecovery][Suite:openshift/etcd/recovery][Timeout:2h] [Feature:EtcdRecovery][Disruptive] Recover with snapshot with two unhealthy nodes and lost quorum [Serial]" started: 22/45/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should update boot images on all MachineSets when configured [apigroup:machineconfiguration.openshift.io]" skip [github.com/openshift/origin/test/extended/machine_config/helpers.go:56]: This test only applies to VSphere platform skipped: (7s) 2025-11-05T06:45:59 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should update boot images on all MachineSets when configured [apigroup:machineconfiguration.openshift.io]" started: 22/46/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO password PolarionID:59417-MCD create/update password with MachineConfig in CoreOS nodes" I1105 06:46:20.399992 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:46:28Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:2daf949415 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-6d7b946c3cd5c38f74e521896bf893b8 map[firstTimestamp:2025-11-05T06:46:28Z lastTimestamp:2025-11-05T06:46:28Z reason:SetDesiredConfig]}" time="2025-11-05T06:46:54Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:dfb00f573f machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-6d7b946c3cd5c38f74e521896bf893b8 map[firstTimestamp:2025-11-05T06:46:54Z lastTimestamp:2025-11-05T06:46:54Z reason:SetDesiredConfig]}" I1105 06:47:20.640834 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:47:25Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:e0b10a2379 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-6d7b946c3cd5c38f74e521896bf893b8 map[firstTimestamp:2025-11-05T06:47:25Z lastTimestamp:2025-11-05T06:47:25Z reason:SetDesiredConfig]}" I1105 06:48:20.924640 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:48:42Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:86b1eccff2 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node 
ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-3cddc843d7fc0d160701d1f23deca655 map[firstTimestamp:2025-11-05T06:48:42Z lastTimestamp:2025-11-05T06:48:42Z reason:SetDesiredConfig]}" time="2025-11-05T06:49:09Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:ede6bfce6c machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-3cddc843d7fc0d160701d1f23deca655 map[firstTimestamp:2025-11-05T06:49:09Z lastTimestamp:2025-11-05T06:49:09Z reason:SetDesiredConfig]}" I1105 06:49:21.177320 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:49:40Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:1a6b9f38f1 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-3cddc843d7fc0d160701d1f23deca655 map[firstTimestamp:2025-11-05T06:49:40Z lastTimestamp:2025-11-05T06:49:40Z reason:SetDesiredConfig]}" I1105 06:50:21.475069 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:50:45Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T06:50:45Z lastTimestamp:2025-11-05T06:50:45Z reason:SetDesiredConfig]}" time="2025-11-05T06:51:11Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T06:51:11Z lastTimestamp:2025-11-05T06:51:11Z reason:SetDesiredConfig]}" I1105 06:51:21.724807 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:51:43Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:16a31e5783 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T06:51:43Z lastTimestamp:2025-11-05T06:51:43Z reason:SetDesiredConfig]}" time="2025-11-05T06:52:11Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-d2ddd]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[firstTimestamp:2025-11-05T06:52:11Z lastTimestamp:2025-11-05T06:52:11Z reason:Unhealthy]}" I1105 06:52:22.001238 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' passed: (6m50s) 2025-11-05T06:52:50 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO 
password PolarionID:59417-MCD create/update password with MachineConfig in CoreOS nodes" started: 22/47/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should update boot images only on MachineSets that are opted in [apigroup:machineconfiguration.openshift.io]" skip [github.com/openshift/origin/test/extended/machine_config/helpers.go:56]: This test only applies to VSphere platform skipped: (5.3s) 2025-11-05T06:52:57 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should update boot images only on MachineSets that are opted in [apigroup:machineconfiguration.openshift.io]" started: 22/48/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] [Disruptive] Should update boot images on an Azure MachineSets with a legacy boot image and scale successfully [apigroup:machineconfiguration.openshift.io]" skip [github.com/openshift/machine-config-operator/test/extended/boot_image.go:40]: This test only applies to Azure platform skipped: (7.7s) 2025-11-05T06:53:05 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][OCPFeatureGate:ManagedBootImagesAzure] [Disruptive] Should update boot images on an Azure MachineSets with a legacy boot image and scale successfully [apigroup:machineconfiguration.openshift.io]" started: 22/49/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] [Slow]All Nodes in a custom Pool should have the PinnedImages even after Garbage Collection [apigroup:machineconfiguration.openshift.io] [Serial]" time="2025-11-05T06:53:16Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:099d4d3fd3 machineconfigpool:custom namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-custom-68e6c340dbef76691f081bbf7159850a map[firstTimestamp:2025-11-05T06:53:16Z lastTimestamp:2025-11-05T06:53:16Z reason:SetDesiredConfig]}" I1105 06:53:22.276574 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:54:07Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:8a9ad3b296 machineconfigpool:custom namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-custom-470fc9301258511084321ae512644615 map[firstTimestamp:2025-11-05T06:54:07Z lastTimestamp:2025-11-05T06:54:07Z reason:SetDesiredConfig]}" I1105 06:54:22.550795 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:55:19Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:55:19Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:55:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready 
map[firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:55:19Z reason:NodeNotReady]}" time="2025-11-05T06:55:20Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:2 firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:55:20Z reason:TopologyAwareHintsDisabled]}" I1105 06:55:22.894585 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientMemory roles:custom,worker]}" time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasNoDiskPressure roles:custom,worker]}" time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientPID roles:custom,worker]}" time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientMemory roles:custom,worker]}" time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasNoDiskPressure roles:custom,worker]}" time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientPID roles:custom,worker]}" time="2025-11-05T06:55:30Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: 
NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientMemory roles:custom,worker]}" time="2025-11-05T06:55:31Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasNoDiskPressure roles:custom,worker]}" time="2025-11-05T06:55:31Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientPID roles:custom,worker]}" time="2025-11-05T06:55:31Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:4 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientMemory roles:custom,worker]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:4 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasNoDiskPressure roles:custom,worker]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:4 firstTimestamp:2025-11-05T06:55:30Z lastTimestamp:2025-11-05T06:55:30Z reason:NodeHasSufficientPID roles:custom,worker]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:NetworkNotReady]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:3 firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:55:31Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:31Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:4 firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:55:32Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:32Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:32Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:32Z reason:FailedMount]}" time="2025-11-05T06:55:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:32Z reason:FailedMount]}" time="2025-11-05T06:55:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:33Z reason:NetworkNotReady]}" time="2025-11-05T06:55:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:34Z reason:FailedMount]}" time="2025-11-05T06:55:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:34Z reason:FailedMount]}" time="2025-11-05T06:55:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:34Z reason:FailedMount]}" time="2025-11-05T06:55:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:34Z reason:FailedMount]}" time="2025-11-05T06:55:35Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:35Z reason:NetworkNotReady]}" time="2025-11-05T06:55:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:4 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:37Z reason:NetworkNotReady]}" time="2025-11-05T06:55:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:38Z reason:FailedMount]}" time="2025-11-05T06:55:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:38Z reason:FailedMount]}" time="2025-11-05T06:55:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:38Z reason:FailedMount]}" time="2025-11-05T06:55:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:38Z reason:FailedMount]}" time="2025-11-05T06:55:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T06:55:31Z lastTimestamp:2025-11-05T06:55:39Z reason:NetworkNotReady]}" time="2025-11-05T06:55:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:55:46Z lastTimestamp:2025-11-05T06:55:46Z reason:ProbeError]}" time="2025-11-05T06:55:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T06:55:46Z lastTimestamp:2025-11-05T06:55:46Z reason:Unhealthy]}" time="2025-11-05T06:55:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:55:46Z lastTimestamp:2025-11-05T06:55:46Z reason:ProbeError]}" time="2025-11-05T06:55:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T06:55:46Z lastTimestamp:2025-11-05T06:55:46Z reason:Unhealthy]}" time="2025-11-05T06:55:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T06:55:47Z lastTimestamp:2025-11-05T06:55:47Z reason:AddedInterface]}" time="2025-11-05T06:55:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T06:55:47Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T06:55:47Z reason:Pulled]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:55:48Z reason:Created]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 
pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:55:48Z reason:Started]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T06:55:48Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T06:55:48Z reason:Pulled]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:55:48Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:55:48Z reason:Created]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:55:48Z reason:Started]}" time="2025-11-05T06:55:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T06:55:48Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T06:55:48Z reason:Pulling]}" time="2025-11-05T06:55:49Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:2 firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:55:49Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:55:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:6bd0846670 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 802ms (802ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T06:55:49Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T06:55:49Z reason:Pulled]}" time="2025-11-05T06:55:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T06:55:49Z lastTimestamp:2025-11-05T06:55:49Z reason:Created]}" time="2025-11-05T06:55:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T06:55:49Z lastTimestamp:2025-11-05T06:55:49Z reason:Started]}" time="2025-11-05T06:56:05Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:2 firstTimestamp:2025-11-05T06:50:45Z lastTimestamp:2025-11-05T06:56:05Z reason:SetDesiredConfig]}" I1105 06:56:23.307977 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 06:57:23.597292 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:57:55Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:5 firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:57:55Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:57:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready map[count:2 firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:57:55Z reason:NodeNotReady]}" time="2025-11-05T06:58:06Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T06:58:06Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T06:58:06Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID 
map[firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T06:58:06Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T06:58:06Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T06:58:06Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T06:58:07Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T06:58:07Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T06:58:07Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T06:58:06Z lastTimestamp:2025-11-05T06:58:06Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:07Z reason:NetworkNotReady]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:07Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:07Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:07Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:07Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:6 firstTimestamp:2025-11-05T06:55:19Z lastTimestamp:2025-11-05T06:58:08Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:08Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:08Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:08Z reason:FailedMount]}" time="2025-11-05T06:58:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:08Z reason:FailedMount]}" time="2025-11-05T06:58:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:09Z reason:FailedMount]}" time="2025-11-05T06:58:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:09Z reason:FailedMount]}" time="2025-11-05T06:58:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:09Z reason:FailedMount]}" time="2025-11-05T06:58:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:09Z reason:FailedMount]}" time="2025-11-05T06:58:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:2 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:09Z reason:NetworkNotReady]}" time="2025-11-05T06:58:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:11Z reason:FailedMount]}" time="2025-11-05T06:58:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:11Z reason:FailedMount]}" time="2025-11-05T06:58:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:11Z reason:FailedMount]}" time="2025-11-05T06:58:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:11Z reason:FailedMount]}" time="2025-11-05T06:58:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:11Z reason:NetworkNotReady]}" time="2025-11-05T06:58:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:4 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:13Z reason:NetworkNotReady]}" time="2025-11-05T06:58:15Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:15Z reason:FailedMount]}" time="2025-11-05T06:58:15Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:15Z reason:FailedMount]}" time="2025-11-05T06:58:15Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:15Z reason:FailedMount]}" time="2025-11-05T06:58:15Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:15Z reason:FailedMount]}" time="2025-11-05T06:58:15Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T06:58:07Z lastTimestamp:2025-11-05T06:58:15Z reason:NetworkNotReady]}" time="2025-11-05T06:58:21Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:58:21Z lastTimestamp:2025-11-05T06:58:21Z reason:ProbeError]}" time="2025-11-05T06:58:21Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T06:58:21Z lastTimestamp:2025-11-05T06:58:21Z reason:Unhealthy]}" time="2025-11-05T06:58:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T06:58:22Z lastTimestamp:2025-11-05T06:58:22Z reason:ProbeError]}" time="2025-11-05T06:58:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T06:58:22Z lastTimestamp:2025-11-05T06:58:22Z reason:Unhealthy]}" I1105 06:58:23.869374 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:58:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T06:58:24Z lastTimestamp:2025-11-05T06:58:24Z reason:AddedInterface]}" time="2025-11-05T06:58:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T06:58:24Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T06:58:24Z reason:Pulled]}" time="2025-11-05T06:58:25Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:3 firstTimestamp:2025-11-05T06:55:48Z 
lastTimestamp:2025-11-05T06:58:25Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:58:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T06:58:25Z lastTimestamp:2025-11-05T06:58:25Z reason:Created]}" time="2025-11-05T06:58:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T06:58:25Z lastTimestamp:2025-11-05T06:58:25Z reason:Started]}" time="2025-11-05T06:58:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T06:58:25Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T06:58:25Z reason:Pulled]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:4 firstTimestamp:2025-11-05T06:55:48Z lastTimestamp:2025-11-05T06:58:26Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T06:58:26Z lastTimestamp:2025-11-05T06:58:26Z reason:Created]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T06:58:26Z lastTimestamp:2025-11-05T06:58:26Z reason:Started]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T06:58:26Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T06:58:26Z reason:Pulling]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:4fa2c4aca7 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 655ms (655ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T06:58:26Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T06:58:26Z reason:Pulled]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T06:58:26Z lastTimestamp:2025-11-05T06:58:26Z reason:Created]}" time="2025-11-05T06:58:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T06:58:26Z lastTimestamp:2025-11-05T06:58:26Z reason:Started]}"
passed: (5m25s) 2025-11-05T06:58:31 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:PinnedImages][Disruptive] [Slow]All Nodes in a custom Pool should have the PinnedImages even after Garbage Collection [apigroup:machineconfiguration.openshift.io] [Serial]"
started: 22/50/55 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should not update boot images on any MachineSet when not configured [apigroup:machineconfiguration.openshift.io]"
skip [github.com/openshift/origin/test/extended/machine_config/helpers.go:56]: This test only applies to VSphere platform
skipped: (5.9s) 2025-11-05T06:58:38 "[Suite:openshift/machine-config-operator/disruptive][sig-mco][OCPFeatureGate:ManagedBootImagesvSphere][Serial] Should not update boot images on any MachineSet when not configured [apigroup:machineconfiguration.openshift.io]"
started: 22/51/55 "[sig-etcd][Feature:DisasterRecovery][Suite:openshift/etcd/recovery][Timeout:30m] [Feature:EtcdRecovery][Disruptive] Restore snapshot from node on another single unhealthy node [Serial]"
I1105 06:59:24.140231 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T06:59:31Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T06:59:31Z reason:BackOff]}" time="2025-11-05T06:59:34Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:2 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T06:59:34Z reason:BackOff]}" time="2025-11-05T06:59:36Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 
uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:37Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:37Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:4 firstTimestamp:2025-11-05T06:36:12Z lastTimestamp:2025-11-05T06:59:37Z reason:ProbeError]}" time="2025-11-05T06:59:37Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:5a2023a0f5 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[count:4 firstTimestamp:2025-11-05T06:36:12Z lastTimestamp:2025-11-05T06:59:37Z reason:Unhealthy]}" time="2025-11-05T06:59:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e4b8949ef4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:59:37Z reason:ProbeError]}" time="2025-11-05T06:59:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4d35b79a3 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": dial tcp 10.0.0.7:9980: connect: connection refused map[count:11 firstTimestamp:2025-11-05T06:35:12Z lastTimestamp:2025-11-05T06:59:37Z reason:Unhealthy]}" time="2025-11-05T06:59:38Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:39Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" 
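The client.go "Running 'oc ... adm upgrade status --details=all'" entries above recur roughly once a minute (06:56:23, 06:57:23, 06:58:23, 06:59:24, ...). For anyone replaying this monitoring step against their own cluster, a minimal shell sketch of that polling loop follows; it uses only the command recorded in this log, and the 60-second interval is inferred from the log timestamps rather than taken from the harness source, so treat both the interval and the kubeconfig path as assumptions:

    #!/bin/sh
    # Poll cluster upgrade status once a minute, matching the cadence of the
    # "Running 'oc ... adm upgrade status --details=all'" entries in this log.
    # The kubeconfig path below is the temporary file the harness recorded;
    # substitute your own.
    KUBECONFIG=/tmp/kubeconfig-2093074633
    while true; do
        oc --kubeconfig="$KUBECONFIG" adm upgrade status --details=all
        sleep 60
    done
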
time="2025-11-05T06:59:40Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:41Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:773222eaca namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 503 map[firstTimestamp:2025-11-05T06:59:41Z lastTimestamp:2025-11-05T06:59:41Z reason:Unhealthy]}" time="2025-11-05T06:59:42Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:43Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:44Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:45Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:46Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:47Z" level=error msg="pod logged an error: the server could not find 
the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:48Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:48Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-567c95b6d8-t5fll]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T06:59:49Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T06:59:49Z reason:Unhealthy]}" time="2025-11-05T06:59:50Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:51Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:52Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:53Z" level=error 
msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:54Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T06:59:54Z reason:Unhealthy]}" time="2025-11-05T06:59:55Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:56Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:57Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:58Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:59Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T06:59:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 
namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T06:59:59Z reason:Unhealthy]}" time="2025-11-05T07:00:00Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:01Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:02Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:03Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:04Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:04Z reason:Unhealthy]}" time="2025-11-05T07:00:05Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:06Z" level=error msg="pod logged an error: the server could not find the requested resource ( pods/log etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0)" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:00:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:09Z reason:Unhealthy]}" time="2025-11-05T07:00:12Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:3 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:12Z reason:BackOff]}" time="2025-11-05T07:00:13Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:4 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:13Z reason:BackOff]}" time="2025-11-05T07:00:14Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:5 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:14Z reason:BackOff]}" time="2025-11-05T07:00:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:14Z reason:Unhealthy]}" time="2025-11-05T07:00:17Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[count:5 firstTimestamp:2025-11-05T06:36:12Z lastTimestamp:2025-11-05T07:00:17Z reason:ProbeError]}" time="2025-11-05T07:00:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with 
statuscode: 500 map[count:7 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:19Z reason:Unhealthy]}" I1105 07:00:24.398665 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:00:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:24Z reason:Unhealthy]}" time="2025-11-05T07:00:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:29Z reason:Unhealthy]}" time="2025-11-05T07:00:29Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:6 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:29Z reason:BackOff]}" time="2025-11-05T07:00:30Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:7 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:30Z reason:BackOff]}" time="2025-11-05T07:00:31Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5b56cb464d-9s4xz]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:00:32Z" level=info msg="event interval matches AllowBackOffRestartingFailedContainer" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:8 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:32Z reason:BackOff]}" time="2025-11-05T07:00:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T06:59:49Z lastTimestamp:2025-11-05T07:00:34Z reason:Unhealthy]}" time="2025-11-05T07:00:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:00:35Z reason:Unhealthy]}" time="2025-11-05T07:00:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:767665f2e8 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.157:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.157:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:00:39Z lastTimestamp:2025-11-05T07:00:39Z reason:ProbeError]}" time="2025-11-05T07:00:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ad7f086254 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-78bc654c8b-whkvl]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.157:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.157:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:00:39Z lastTimestamp:2025-11-05T07:00:39Z reason:Unhealthy]}" time="2025-11-05T07:00:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:00:40Z reason:Unhealthy]}" time="2025-11-05T07:00:43Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:e906e5a2d4 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{BackOff Back-off restarting failed container etcd in pod etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0_openshift-etcd(69e5fe6fed763193e9a899089b8769e7) map[count:9 firstTimestamp:2025-11-05T06:59:31Z lastTimestamp:2025-11-05T07:00:43Z 
reason:BackOff]}" time="2025-11-05T07:00:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:00:45Z reason:Unhealthy]}" time="2025-11-05T07:00:46Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-567c95b6d8-cvklh]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:00:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:00:47Z reason:Unhealthy]}" time="2025-11-05T07:00:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:00:50Z reason:Unhealthy]}" time="2025-11-05T07:00:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:00:52Z reason:Unhealthy]}" time="2025-11-05T07:00:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:00:55Z reason:Unhealthy]}" time="2025-11-05T07:00:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:00:57Z reason:Unhealthy]}" time="2025-11-05T07:01:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 
500 map[count:6 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:01:00Z reason:Unhealthy]}" time="2025-11-05T07:01:02Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:02Z reason:Unhealthy]}" time="2025-11-05T07:01:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:01:05Z reason:Unhealthy]}" time="2025-11-05T07:01:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:07Z reason:Unhealthy]}" time="2025-11-05T07:01:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:01:10Z reason:Unhealthy]}" time="2025-11-05T07:01:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:01:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:01:11Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:40 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T07:01:11Z reason:ProbeError]}" time="2025-11-05T07:01:11Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 
time="2025-11-05T07:01:12Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:12Z reason:Unhealthy]}"
time="2025-11-05T07:01:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Node map[hmsg:6856f57bfe node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{ConnectivityOutageDetected Connectivity outage detected: network-check-target-service-cluster: failed to establish a TCP connection to network-check-target:80: dial tcp 172.30.138.251:80: connect: connection refused map[firstTimestamp:2025-11-05T07:01:12Z lastTimestamp:2025-11-05T07:01:12Z reason:ConnectivityOutageDetected roles:worker]}"
time="2025-11-05T07:01:13Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:14Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:15Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:01:15Z reason:Unhealthy]}"
time="2025-11-05T07:01:16Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:16Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:41 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T07:01:16Z reason:ProbeError]}"
time="2025-11-05T07:01:16Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:41 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T07:01:16Z reason:Unhealthy]}"
time="2025-11-05T07:01:17Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:17Z reason:Unhealthy]}"
time="2025-11-05T07:01:18Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:19Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:20Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:01:20Z reason:Unhealthy]}"
msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:00:35Z lastTimestamp:2025-11-05T07:01:20Z reason:Unhealthy]}" time="2025-11-05T07:01:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:01:21Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:42 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T07:01:21Z reason:ProbeError]}" time="2025-11-05T07:01:21Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d662c5e307 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused map[count:42 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T07:01:21Z reason:Unhealthy]}" time="2025-11-05T07:01:21Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4de13e20fa namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10259/healthz\": dial tcp 10.0.0.5:10259: connect: connection refused\nbody: \n map[count:43 firstTimestamp:2025-11-05T04:15:03Z lastTimestamp:2025-11-05T07:01:21Z reason:ProbeError]}" time="2025-11-05T07:01:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:01:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:22Z reason:Unhealthy]}" time="2025-11-05T07:01:22Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 
pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:52 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T07:01:22Z reason:ProbeError]}" time="2025-11-05T07:01:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:7f6d64717b namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:52 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T07:01:22Z reason:Unhealthy]}" time="2025-11-05T07:01:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7" time="2025-11-05T07:01:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-567c95b6d8-cvklh]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:01:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5b56cb464d-9s4xz]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
time="2025-11-05T07:01:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
I1105 07:01:24.672799 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:01:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fec7e9cdb0 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.142:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.142:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:01:25Z lastTimestamp:2025-11-05T07:01:25Z reason:ProbeError]}"
time="2025-11-05T07:01:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:502049f2b1 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.142:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.142:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:01:25Z lastTimestamp:2025-11-05T07:01:25Z reason:Unhealthy]}"
time="2025-11-05T07:01:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:27Z reason:Unhealthy]}"
time="2025-11-05T07:01:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:53 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T07:01:27Z reason:ProbeError]}"
time="2025-11-05T07:01:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:7f6d64717b namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:53 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T07:01:27Z reason:Unhealthy]}"
time="2025-11-05T07:01:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:30Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fec7e9cdb0 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.142:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.142:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T07:01:25Z lastTimestamp:2025-11-05T07:01:30Z reason:ProbeError]}"
time="2025-11-05T07:01:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:502049f2b1 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.142:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.142:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T07:01:25Z lastTimestamp:2025-11-05T07:01:30Z reason:Unhealthy]}"
time="2025-11-05T07:01:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:00:47Z lastTimestamp:2025-11-05T07:01:32Z reason:Unhealthy]}"
time="2025-11-05T07:01:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f98b6f42c2 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused\nbody: \n map[count:54 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T07:01:32Z reason:ProbeError]}"
time="2025-11-05T07:01:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:7f6d64717b namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:10257/healthz\": dial tcp 10.0.0.5:10257: connect: connection refused map[count:54 firstTimestamp:2025-11-05T04:22:27Z lastTimestamp:2025-11-05T07:01:32Z reason:Unhealthy]}"
time="2025-11-05T07:01:33Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:34Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:35Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fec7e9cdb0 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b9bb8d494-fl2hr]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.142:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.142:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T07:01:25Z lastTimestamp:2025-11-05T07:01:35Z reason:ProbeError]}"
time="2025-11-05T07:01:36Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:37Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4c867b4329 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.139:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.139:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:01:37Z lastTimestamp:2025-11-05T07:01:37Z reason:ProbeError]}"
time="2025-11-05T07:01:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14f7279807 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-78bc654c8b-mkwvr]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.139:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.139:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:01:37Z lastTimestamp:2025-11-05T07:01:37Z reason:Unhealthy]}"
time="2025-11-05T07:01:38Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:39Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:40Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:41Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:42Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:44Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:45Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-567c95b6d8-grvpr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T07:01:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:01:45Z reason:Unhealthy]}"
time="2025-11-05T07:01:45Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:46Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:46Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:91 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T07:01:46Z reason:ProbeError]}"
time="2025-11-05T07:01:46Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:172 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T07:01:46Z reason:Unhealthy]}"
time="2025-11-05T07:01:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/d7c8e216-f056-4872-b7fa-63ced34ef4a7 container/etcd mirror-uid/69e5fe6fed763193e9a899089b8769e7"
time="2025-11-05T07:01:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081"
time="2025-11-05T07:01:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081"
time="2025-11-05T07:01:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:01:50Z reason:Unhealthy]}"
time="2025-11-05T07:01:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081"
time="2025-11-05T07:01:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081"
time="2025-11-05T07:01:51Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:92 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T07:01:51Z reason:ProbeError]}"
time="2025-11-05T07:01:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:173 firstTimestamp:2025-11-05T04:19:24Z lastTimestamp:2025-11-05T07:01:51Z reason:Unhealthy]}"
time="2025-11-05T07:01:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081"
time="2025-11-05T07:01:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:01:55Z reason:Unhealthy]}"
level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:01:55Z reason:Unhealthy]}" time="2025-11-05T07:02:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:00Z reason:Unhealthy]}" time="2025-11-05T07:02:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:05Z reason:Unhealthy]}" time="2025-11-05T07:02:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:10Z reason:Unhealthy]}" time="2025-11-05T07:02:14Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-n74cr]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:02:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:15Z reason:Unhealthy]}" time="2025-11-05T07:02:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:20Z reason:Unhealthy]}" I1105 07:02:24.929822 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:02:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:25Z reason:Unhealthy]}" time="2025-11-05T07:02:27Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:27Z reason:ProbeError]}" time="2025-11-05T07:02:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:9 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:27Z reason:Unhealthy]}" time="2025-11-05T07:02:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:01:45Z lastTimestamp:2025-11-05T07:02:30Z reason:Unhealthy]}" time="2025-11-05T07:02:32Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-5b56cb464d-l4cp9]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. 
time="2025-11-05T07:02:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:32Z reason:ProbeError]}"
time="2025-11-05T07:02:32Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:10 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:32Z reason:Unhealthy]}"
time="2025-11-05T07:02:33Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-6f7b79fb8-njlhw]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T07:02:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:02:34Z reason:Unhealthy]}"
time="2025-11-05T07:02:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:a86de0c05f namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.111:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.111:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:02:35Z lastTimestamp:2025-11-05T07:02:35Z reason:ProbeError]}"
time="2025-11-05T07:02:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f7eff0441f namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-78bc654c8b-btw9k]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.111:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.111:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:02:35Z lastTimestamp:2025-11-05T07:02:35Z reason:Unhealthy]}"
time="2025-11-05T07:02:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:37Z reason:ProbeError]}"
time="2025-11-05T07:02:37Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:11 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:37Z reason:Unhealthy]}"
time="2025-11-05T07:02:37Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:1ae1f85da7 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused\nbody: \n map[count:12 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:37Z reason:ProbeError]}"
time="2025-11-05T07:02:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:14822247c0 namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:12 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:37Z reason:Unhealthy]}"
pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10257/healthz\": dial tcp 10.0.0.7:10257: connect: connection refused map[count:12 firstTimestamp:2025-11-05T06:30:22Z lastTimestamp:2025-11-05T07:02:37Z reason:Unhealthy]}" time="2025-11-05T07:02:39Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:02:39Z reason:Unhealthy]}" time="2025-11-05T07:02:43Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-nbp4t]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:02:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:02:44Z reason:Unhealthy]}" time="2025-11-05T07:02:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:02:44Z reason:Unhealthy]}" time="2025-11-05T07:02:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:02:49Z reason:Unhealthy]}" time="2025-11-05T07:02:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:02:49Z reason:Unhealthy]}" time="2025-11-05T07:02:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:02:54Z reason:Unhealthy]}" time="2025-11-05T07:02:54Z" level=info 
msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:02:54Z reason:Unhealthy]}" time="2025-11-05T07:02:57Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:14 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T07:02:57Z reason:ProbeError]}" time="2025-11-05T07:02:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:14 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T07:02:57Z reason:Unhealthy]}" time="2025-11-05T07:02:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:02:59Z reason:Unhealthy]}" time="2025-11-05T07:02:59Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:02:59Z reason:Unhealthy]}" time="2025-11-05T07:03:02Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:15 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T07:03:02Z reason:ProbeError]}" time="2025-11-05T07:03:02Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:15 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T07:03:02Z reason:Unhealthy]}" time="2025-11-05T07:03:04Z" level=info msg="event interval matches 
KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:03:04Z reason:Unhealthy]}" time="2025-11-05T07:03:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:03:04Z reason:Unhealthy]}" time="2025-11-05T07:03:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2652c73da5 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused\nbody: \n map[count:16 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T07:03:07Z reason:ProbeError]}" time="2025-11-05T07:03:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4816521475 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:10259/healthz\": dial tcp 10.0.0.7:10259: connect: connection refused map[count:16 firstTimestamp:2025-11-05T06:30:27Z lastTimestamp:2025-11-05T07:03:07Z reason:Unhealthy]}" time="2025-11-05T07:03:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:03:09Z reason:Unhealthy]}" time="2025-11-05T07:03:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:03:09Z reason:Unhealthy]}" time="2025-11-05T07:03:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:03:14Z reason:Unhealthy]}" time="2025-11-05T07:03:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with 
statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:03:14Z reason:Unhealthy]}" time="2025-11-05T07:03:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:02:34Z lastTimestamp:2025-11-05T07:03:19Z reason:Unhealthy]}" time="2025-11-05T07:03:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:03:19Z reason:Unhealthy]}" time="2025-11-05T07:03:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:de48f65afa namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.156:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.156:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:03:24Z lastTimestamp:2025-11-05T07:03:24Z reason:ProbeError]}" time="2025-11-05T07:03:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:09be92ad33 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.0.156:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.156:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:03:24Z lastTimestamp:2025-11-05T07:03:24Z reason:Unhealthy]}" time="2025-11-05T07:03:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:03:24Z reason:Unhealthy]}" I1105 07:03:25.192713 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:03:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:de48f65afa namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.156:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.156:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T07:03:24Z lastTimestamp:2025-11-05T07:03:29Z reason:ProbeError]}" time="2025-11-05T07:03:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:09be92ad33 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{Unhealthy Readiness probe failed: Get 
\"https://10.129.0.156:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.156:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T07:03:24Z lastTimestamp:2025-11-05T07:03:29Z reason:Unhealthy]}" time="2025-11-05T07:03:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:02:44Z lastTimestamp:2025-11-05T07:03:29Z reason:Unhealthy]}" time="2025-11-05T07:03:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:de48f65afa namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-5b9bb8d494-j6jcn]}" message="{ProbeError Readiness probe error: Get \"https://10.129.0.156:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.129.0.156:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T07:03:24Z lastTimestamp:2025-11-05T07:03:34Z reason:ProbeError]}" time="2025-11-05T07:03:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5a3f723b66 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.163:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.163:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:03:34Z lastTimestamp:2025-11-05T07:03:34Z reason:ProbeError]}" time="2025-11-05T07:03:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:222a1ee08c namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-567c95b6d8-cvklh]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.163:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.163:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:03:34Z lastTimestamp:2025-11-05T07:03:34Z reason:Unhealthy]}" time="2025-11-05T07:03:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:7 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:38Z reason:ProbeError]}" time="2025-11-05T07:03:38Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:7 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:38Z reason:Unhealthy]}" time="2025-11-05T07:03:43Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind 
map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:8 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:43Z reason:ProbeError]}" time="2025-11-05T07:03:43Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:8 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:43Z reason:Unhealthy]}" time="2025-11-05T07:03:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:9 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:48Z reason:ProbeError]}" time="2025-11-05T07:03:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:9 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:48Z reason:Unhealthy]}" time="2025-11-05T07:03:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:65cd3c913f namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:48Z reason:ProbeError]}" time="2025-11-05T07:03:48Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d94f36ceca namespace:openshift-kube-controller-manager node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-controller-manager-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10257/healthz\": dial tcp 10.0.0.8:10257: connect: connection refused map[count:10 firstTimestamp:2025-11-05T05:40:23Z lastTimestamp:2025-11-05T07:03:48Z reason:Unhealthy]}" time="2025-11-05T07:03:52Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-oauth-apiserver pod:apiserver-78bc654c8b-v5bdl]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match 
Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:03:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:03:56Z reason:Unhealthy]}" time="2025-11-05T07:04:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:01Z reason:Unhealthy]}" time="2025-11-05T07:04:03Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:04Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:04Z" level=info msg="event interval matches KubeAPIServerProgressingDuringSingleNodeUpgrade" locator="{Kind map[hmsg:90427cd033 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nbody: \n map[firstTimestamp:2025-11-05T07:04:04Z lastTimestamp:2025-11-05T07:04:04Z reason:ProbeError]}" time="2025-11-05T07:04:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:5a2023a0f5 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.7:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers) map[firstTimestamp:2025-11-05T07:04:04Z lastTimestamp:2025-11-05T07:04:04Z reason:Unhealthy]}" time="2025-11-05T07:04:05Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:06Z" level=info msg="event interval matches 
KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:06Z reason:Unhealthy]}" time="2025-11-05T07:04:06Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:07Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:08Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:09Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:10Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:11Z reason:Unhealthy]}" time="2025-11-05T07:04:11Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:12Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:13Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/068c3b7c-d158-4e9d-b2fb-8d41619c255c container/etcd mirror-uid/949e378dd6f0b41e05a1bb9729a25081" time="2025-11-05T07:04:14Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/119a62b9-3ecc-41bd-95e4-7a04d57bb44a container/etcd mirror-uid/39db537683dd5825fa1369e7a0a03ec0" time="2025-11-05T07:04:15Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/119a62b9-3ecc-41bd-95e4-7a04d57bb44a container/etcd mirror-uid/39db537683dd5825fa1369e7a0a03ec0" time="2025-11-05T07:04:16Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/119a62b9-3ecc-41bd-95e4-7a04d57bb44a container/etcd mirror-uid/39db537683dd5825fa1369e7a0a03ec0" time="2025-11-05T07:04:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:16Z reason:Unhealthy]}" time="2025-11-05T07:04:17Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/119a62b9-3ecc-41bd-95e4-7a04d57bb44a container/etcd mirror-uid/39db537683dd5825fa1369e7a0a03ec0" time="2025-11-05T07:04:18Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 uid/119a62b9-3ecc-41bd-95e4-7a04d57bb44a container/etcd mirror-uid/39db537683dd5825fa1369e7a0a03ec0" time="2025-11-05T07:04:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed 
with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:21Z reason:Unhealthy]}" I1105 07:04:25.492483 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:04:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:26Z reason:Unhealthy]}" time="2025-11-05T07:04:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:31Z reason:Unhealthy]}" time="2025-11-05T07:04:33Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc namespace:openshift-apiserver pod:apiserver-6f7b79fb8-qn584]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:04:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:36Z reason:Unhealthy]}" time="2025-11-05T07:04:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:04:36Z reason:Unhealthy]}" time="2025-11-05T07:04:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-oauth-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:apiserver-567c95b6d8-t5fll]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:03:56Z lastTimestamp:2025-11-05T07:04:41Z reason:Unhealthy]}" time="2025-11-05T07:04:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:04:41Z reason:Unhealthy]}" time="2025-11-05T07:04:46Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind 
map[hmsg:c65172b9bf namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:6443/readyz\": dial tcp 10.0.0.5:6443: connect: connection refused\nbody: \n map[count:24 firstTimestamp:2025-11-05T04:20:34Z lastTimestamp:2025-11-05T07:04:46Z reason:ProbeError]}" time="2025-11-05T07:04:46Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:04:46Z reason:Unhealthy]}" time="2025-11-05T07:04:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:04:51Z reason:Unhealthy]}" time="2025-11-05T07:04:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:04:56Z reason:Unhealthy]}" time="2025-11-05T07:04:58Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:10 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:04:58Z reason:ProbeError]}" time="2025-11-05T07:04:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:10 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:04:58Z reason:Unhealthy]}" time="2025-11-05T07:05:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:05:01Z reason:Unhealthy]}" time="2025-11-05T07:05:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 
pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:11 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:05:03Z reason:ProbeError]}" time="2025-11-05T07:05:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:11 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:05:03Z reason:Unhealthy]}" time="2025-11-05T07:05:06Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:05:06Z reason:Unhealthy]}" time="2025-11-05T07:05:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:12 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:05:08Z reason:ProbeError]}" time="2025-11-05T07:05:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:12 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:05:08Z reason:Unhealthy]}" time="2025-11-05T07:05:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5d07821b69 namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused\nbody: \n map[count:13 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:05:08Z reason:ProbeError]}" time="2025-11-05T07:05:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:d07f8fa06c namespace:openshift-kube-scheduler node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:openshift-kube-scheduler-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:10259/healthz\": dial tcp 10.0.0.8:10259: connect: connection refused map[count:13 firstTimestamp:2025-11-05T05:40:28Z lastTimestamp:2025-11-05T07:05:08Z reason:Unhealthy]}" 
time="2025-11-05T07:05:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:05:11Z reason:Unhealthy]}" time="2025-11-05T07:05:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:05:16Z reason:Unhealthy]}" time="2025-11-05T07:05:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:04:36Z lastTimestamp:2025-11-05T07:05:21Z reason:Unhealthy]}" I1105 07:05:25.737947 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:05:26Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:86fc4baaac namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.113:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.113:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:05:26Z lastTimestamp:2025-11-05T07:05:26Z reason:ProbeError]}" time="2025-11-05T07:05:26Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:4b25b56630 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.113:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.113:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:05:26Z lastTimestamp:2025-11-05T07:05:26Z reason:Unhealthy]}" time="2025-11-05T07:05:31Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:86fc4baaac namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.113:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.113:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T07:05:26Z lastTimestamp:2025-11-05T07:05:31Z reason:ProbeError]}" time="2025-11-05T07:05:31Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4b25b56630 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.2.113:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.113:8443: connect: connection refused map[count:2 
firstTimestamp:2025-11-05T07:05:26Z lastTimestamp:2025-11-05T07:05:31Z reason:Unhealthy]}" time="2025-11-05T07:05:36Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:86fc4baaac namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:apiserver-5b9bb8d494-sp494]}" message="{ProbeError Readiness probe error: Get \"https://10.131.2.113:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.131.2.113:8443: connect: connection refused\nbody: \n map[count:3 firstTimestamp:2025-11-05T07:05:26Z lastTimestamp:2025-11-05T07:05:36Z reason:ProbeError]}" time="2025-11-05T07:05:53Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:250 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T07:05:53Z reason:ProbeError]}" time="2025-11-05T07:05:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:250 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T07:05:53Z reason:Unhealthy]}" time="2025-11-05T07:05:58Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:251 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T07:05:58Z reason:ProbeError]}" time="2025-11-05T07:05:58Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:feccdf558f namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused map[count:251 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T07:05:58Z reason:Unhealthy]}" time="2025-11-05T07:06:03Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:24ee800145 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-2]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.5:9980/readyz\": dial tcp 10.0.0.5:9980: connect: connection refused\nbody: \n map[count:252 firstTimestamp:2025-11-05T04:21:08Z lastTimestamp:2025-11-05T07:06:03Z reason:ProbeError]}" time="2025-11-05T07:06:21Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd 
mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T07:06:22Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T07:06:23Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T07:06:24Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T07:06:25Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" I1105 07:06:26.010547 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:06:26Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T07:06:27Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/0fc16ba7-91ef-4ef6-aad9-f153ad509ff2 container/etcd mirror-uid/ab375631154327a1ec5a1ec01d416109" time="2025-11-05T07:06:28Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/dbda3ad0-544a-40e0-aaf7-a9788a5ae124 container/etcd mirror-uid/83e86049816ca058879d559e1e9b00ca" time="2025-11-05T07:06:29Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/dbda3ad0-544a-40e0-aaf7-a9788a5ae124 container/etcd mirror-uid/83e86049816ca058879d559e1e9b00ca" time="2025-11-05T07:06:29Z" level=info msg="event interval matches FailedScheduling" locator="{Kind map[hmsg:d787610ddc 
namespace:openshift-apiserver pod:apiserver-6f7b79fb8-zr7p7]}" message="{FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:06:30Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/dbda3ad0-544a-40e0-aaf7-a9788a5ae124 container/etcd mirror-uid/83e86049816ca058879d559e1e9b00ca" time="2025-11-05T07:06:31Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/dbda3ad0-544a-40e0-aaf7-a9788a5ae124 container/etcd mirror-uid/83e86049816ca058879d559e1e9b00ca" time="2025-11-05T07:06:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:06:32Z reason:Unhealthy]}" time="2025-11-05T07:06:32Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-2 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-2 uid/dbda3ad0-544a-40e0-aaf7-a9788a5ae124 container/etcd mirror-uid/83e86049816ca058879d559e1e9b00ca" time="2025-11-05T07:06:32Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller 
ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:16 firstTimestamp:2025-11-05T06:41:32Z lastTimestamp:2025-11-05T07:06:32Z reason:ProbeError]}" time="2025-11-05T07:06:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:32 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T07:06:32Z reason:Unhealthy]}" time="2025-11-05T07:06:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:06:37Z reason:Unhealthy]}" time="2025-11-05T07:06:37Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller 
ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:17 firstTimestamp:2025-11-05T06:41:32Z lastTimestamp:2025-11-05T07:06:37Z reason:ProbeError]}" time="2025-11-05T07:06:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:33 firstTimestamp:2025-11-05T06:32:07Z lastTimestamp:2025-11-05T07:06:37Z reason:Unhealthy]}" time="2025-11-05T07:06:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:06:42Z reason:Unhealthy]}" time="2025-11-05T07:06:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:06:47Z reason:Unhealthy]}" time="2025-11-05T07:06:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:06:52Z reason:Unhealthy]}" time="2025-11-05T07:06:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:06:57Z reason:Unhealthy]}" time="2025-11-05T07:07:02Z" level=info msg="event interval matches 
KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:07:02Z reason:Unhealthy]}" time="2025-11-05T07:07:07Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:07:07Z reason:Unhealthy]}" time="2025-11-05T07:07:07Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:24 firstTimestamp:2025-11-05T06:41:32Z lastTimestamp:2025-11-05T07:07:07Z reason:ProbeError]}" time="2025-11-05T07:07:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" 
locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:9 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:07:12Z reason:Unhealthy]}" time="2025-11-05T07:07:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:10 firstTimestamp:2025-11-05T07:06:32Z lastTimestamp:2025-11-05T07:07:17Z reason:Unhealthy]}" time="2025-11-05T07:07:22Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4ff582562b namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.165:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.165:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:07:22Z lastTimestamp:2025-11-05T07:07:22Z reason:ProbeError]}" time="2025-11-05T07:07:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:830491a705 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.165:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.165:8443: connect: connection refused map[firstTimestamp:2025-11-05T07:07:22Z lastTimestamp:2025-11-05T07:07:22Z reason:Unhealthy]}" I1105 07:07:26.507202 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:07:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4ff582562b namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{ProbeError Readiness probe error: Get \"https://10.130.2.165:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.165:8443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T07:07:22Z lastTimestamp:2025-11-05T07:07:27Z reason:ProbeError]}" time="2025-11-05T07:07:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:830491a705 namespace:openshift-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:apiserver-5b56cb464d-9s4xz]}" message="{Unhealthy Readiness probe failed: Get \"https://10.130.2.165:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.130.2.165:8443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T07:07:22Z lastTimestamp:2025-11-05T07:07:27Z reason:Unhealthy]}" time="2025-11-05T07:08:14Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:143 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T07:08:14Z 
reason:ProbeError]}" time="2025-11-05T07:08:14Z" level=info msg="event interval matches EtcdReadinessProbeFailuresPerRevisionChange" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:143 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T07:08:14Z reason:Unhealthy]}" time="2025-11-05T07:08:19Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:144 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T07:08:19Z reason:ProbeError]}" time="2025-11-05T07:08:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:bd88ad9d7d namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused map[count:144 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T07:08:19Z reason:Unhealthy]}" time="2025-11-05T07:08:24Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:145 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T07:08:24Z reason:ProbeError]}" I1105 07:08:26.768941 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:08:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:43Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:44Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:45Z" level=error msg="pod logged an error: container 
\"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:46Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:47Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:48Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:49Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:50Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is not available" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/4c0a10b8-2160-4920-a6e0-08708be67bfc container/etcd mirror-uid/1722166c307c85ad5842516eecf65990" time="2025-11-05T07:08:51Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/217ede63-19d0-439c-9560-14631ad4a183 container/etcd mirror-uid/6ab21dbf8cb18a6df85d78b9a78150f9" time="2025-11-05T07:08:52Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/217ede63-19d0-439c-9560-14631ad4a183 container/etcd mirror-uid/6ab21dbf8cb18a6df85d78b9a78150f9" time="2025-11-05T07:08:53Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd 
node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/217ede63-19d0-439c-9560-14631ad4a183 container/etcd mirror-uid/6ab21dbf8cb18a6df85d78b9a78150f9" time="2025-11-05T07:08:54Z" level=info msg="event interval matches EtcdReadinessProbeError" locator="{Kind map[hmsg:736824c810 namespace:openshift-etcd node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:etcd-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: Get \"https://10.0.0.8:9980/readyz\": dial tcp 10.0.0.8:9980: connect: connection refused\nbody: \n map[count:152 firstTimestamp:2025-11-05T05:43:53Z lastTimestamp:2025-11-05T07:08:54Z reason:ProbeError]}" time="2025-11-05T07:08:54Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/217ede63-19d0-439c-9560-14631ad4a183 container/etcd mirror-uid/6ab21dbf8cb18a6df85d78b9a78150f9" time="2025-11-05T07:08:55Z" level=error msg="pod logged an error: container \"etcd\" in pod \"etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1\" is waiting to start: PodInitializing" component=PodsStreamer locator="namespace/openshift-etcd node/ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod/etcd-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 uid/217ede63-19d0-439c-9560-14631ad4a183 container/etcd mirror-uid/6ab21dbf8cb18a6df85d78b9a78150f9" I1105 07:09:27.036025 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:09:38Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller 
ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:31 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:38Z reason:ProbeError]}" time="2025-11-05T07:09:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:33 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:38Z reason:Unhealthy]}" time="2025-11-05T07:09:43Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync 
ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:32 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:43Z reason:ProbeError]}" time="2025-11-05T07:09:43Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:34 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:43Z reason:Unhealthy]}" time="2025-11-05T07:09:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:33 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:48Z reason:ProbeError]}" time="2025-11-05T07:09:48Z" 
level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:35 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:48Z reason:Unhealthy]}" time="2025-11-05T07:09:48Z" level=info msg="event interval matches KubeAPIReadinessProbeError" locator="{Kind map[hmsg:7a79dfc2d7 namespace:openshift-kube-apiserver node:ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1 pod:kube-apiserver-guard-ci-op-x0f88pwp-f3da4-d9fgd-master-n9mzx-1]}" message="{ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-api-request-count-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/storage-object-count-tracker-hook ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/start-system-namespaces-controller ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\n[+]poststarthook/start-legacy-token-tracking-controller ok\n[+]poststarthook/start-service-ip-repair-controllers ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-status-local-available-controller ok\n[+]poststarthook/apiservice-status-remote-available-controller ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/apiservice-discovery-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/apiservice-openapiv3-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n map[count:34 firstTimestamp:2025-11-05T05:46:43Z lastTimestamp:2025-11-05T07:09:48Z reason:ProbeError]}" I1105 07:10:27.323235 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:11:28.677663 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:12:28.943968 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:13:29.311997 1669 client.go:1023] Running 'oc 
--kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' passed: (15m5s) 2025-11-05T07:13:44 "[sig-etcd][Feature:DisasterRecovery][Suite:openshift/etcd/recovery][Timeout:30m] [Feature:EtcdRecovery][Disruptive] Restore snapshot from node on another single unhealthy node [Serial]" started: 22/52/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:83140-A MachineOSConfig with custom containerfile definition can be successfully applied" time="2025-11-05T07:14:06Z" level=info msg="event interval matches MarketplaceStartupProbeFailure" locator="{Kind map[hmsg:d25e6fe1ef namespace:openshift-marketplace node:ci-op-x0f88pwp-f3da4-d9fgd-master-m2rxm-0 pod:redhat-operators-rx4gd]}" message="{Unhealthy Startup probe failed: timeout: failed to connect service \":50051\" within 1s\n map[firstTimestamp:2025-11-05T07:14:06Z lastTimestamp:2025-11-05T07:14:06Z reason:Unhealthy]}" I1105 07:14:29.577802 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:15:29.828787 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:16:30.082435 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:17:30.333010 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:18:30.558417 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:19:30.883427 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:20:31.139131 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:21:31.405356 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T07:22:17Z I1105 07:22:31.677564 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T07:23:09Z I1105 07:23:31.959041 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:23:58Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready map[firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:23:58Z reason:NodeNotReady]}" time="2025-11-05T07:23:58Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:23:58Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:23:58Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone 
without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:2 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:23:58Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:24:23Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:24:23Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:24:23Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:24:23Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:24:23Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:24:23Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:24:24Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:24:24Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasNoDiskPressure roles:worker]}" 
time="2025-11-05T07:24:24Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T07:24:23Z lastTimestamp:2025-11-05T07:24:23Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:24Z reason:NetworkNotReady]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:24Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:24Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:24Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:24Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:3 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:24:24Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:25Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:25Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:25Z reason:FailedMount]}" time="2025-11-05T07:24:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:25Z reason:FailedMount]}" time="2025-11-05T07:24:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:26Z reason:FailedMount]}" time="2025-11-05T07:24:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:26Z reason:FailedMount]}" time="2025-11-05T07:24:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:26Z reason:FailedMount]}" time="2025-11-05T07:24:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object 
\"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:26Z reason:FailedMount]}" time="2025-11-05T07:24:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:26Z reason:NetworkNotReady]}" time="2025-11-05T07:24:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:28Z reason:FailedMount]}" time="2025-11-05T07:24:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:28Z reason:FailedMount]}" time="2025-11-05T07:24:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:28Z reason:FailedMount]}" time="2025-11-05T07:24:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:28Z reason:FailedMount]}" time="2025-11-05T07:24:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:3 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:28Z reason:NetworkNotReady]}" time="2025-11-05T07:24:30Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:30Z reason:NetworkNotReady]}" I1105 07:24:32.250704 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:24:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:32Z reason:FailedMount]}" time="2025-11-05T07:24:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:32Z reason:FailedMount]}" time="2025-11-05T07:24:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:32Z reason:FailedMount]}" time="2025-11-05T07:24:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:32Z reason:FailedMount]}" time="2025-11-05T07:24:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T07:24:24Z lastTimestamp:2025-11-05T07:24:32Z reason:NetworkNotReady]}" time="2025-11-05T07:24:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:24:39Z lastTimestamp:2025-11-05T07:24:39Z reason:ProbeError]}" time="2025-11-05T07:24:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T07:24:39Z lastTimestamp:2025-11-05T07:24:39Z reason:Unhealthy]}" time="2025-11-05T07:24:41Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T07:24:40Z lastTimestamp:2025-11-05T07:24:40Z reason:AddedInterface]}" time="2025-11-05T07:24:41Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T07:24:41Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T07:24:41Z reason:Pulled]}" time="2025-11-05T07:24:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:24:41Z lastTimestamp:2025-11-05T07:24:41Z reason:ProbeError]}" time="2025-11-05T07:24:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T07:24:41Z lastTimestamp:2025-11-05T07:24:41Z reason:Unhealthy]}" time="2025-11-05T07:24:42Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:24:42Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:24:42Z" level=info msg="event interval matches E2ELoki" 
locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:24:42Z reason:Created]}" time="2025-11-05T07:24:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:24:42Z reason:Started]}" time="2025-11-05T07:24:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T07:24:42Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T07:24:42Z reason:Pulled]}" time="2025-11-05T07:24:43Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:2 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:24:43Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:24:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T07:24:43Z lastTimestamp:2025-11-05T07:24:43Z reason:Created]}" time="2025-11-05T07:24:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T07:24:43Z lastTimestamp:2025-11-05T07:24:43Z reason:Started]}" time="2025-11-05T07:24:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T07:24:43Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:24:43Z reason:Pulling]}" time="2025-11-05T07:24:44Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:e617758ac8 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 748ms (748ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T07:24:44Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:24:44Z reason:Pulled]}" time="2025-11-05T07:24:44Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T07:24:44Z lastTimestamp:2025-11-05T07:24:44Z reason:Created]}" time="2025-11-05T07:24:44Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T07:24:44Z lastTimestamp:2025-11-05T07:24:44Z reason:Started]}" time="2025-11-05T07:25:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:25:01Z reason:Unhealthy]}" time="2025-11-05T07:25:03Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:25:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:25:05Z reason:Unhealthy]}" time="2025-11-05T07:25:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:25:15Z reason:Unhealthy]}" time="2025-11-05T07:25:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:25:21Z reason:Unhealthy]}" time="2025-11-05T07:25:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:25:25Z reason:Unhealthy]}" time="2025-11-05T07:25:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:07991ae6d0 namespace:openshift-image-registry node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:image-registry-744d7b6578-bn2ks]}" message="{ProbeError Readiness probe error: Get \"https://10.128.2.6:5000/healthz\": dial tcp 10.128.2.6:5000: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:25:27Z lastTimestamp:2025-11-05T07:25:27Z reason:ProbeError]}" time="2025-11-05T07:25:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:5a667ef0cc namespace:openshift-image-registry node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:image-registry-744d7b6578-bn2ks]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.2.6:5000/healthz\": dial tcp 10.128.2.6:5000: connect: connection refused map[firstTimestamp:2025-11-05T07:25:27Z lastTimestamp:2025-11-05T07:25:27Z reason:Unhealthy]}" I1105 07:25:32.606669 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:25:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:25:35Z reason:Unhealthy]}" time="2025-11-05T07:25:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" 
locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:25:41Z reason:Unhealthy]}" time="2025-11-05T07:25:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:25:45Z reason:Unhealthy]}" time="2025-11-05T07:25:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:25:55Z reason:Unhealthy]}" time="2025-11-05T07:26:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:26:01Z reason:Unhealthy]}" time="2025-11-05T07:26:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:26:05Z reason:Unhealthy]}" time="2025-11-05T07:26:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-5pmtp]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:25:05Z lastTimestamp:2025-11-05T07:26:15Z reason:Unhealthy]}" time="2025-11-05T07:26:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:26:21Z reason:Unhealthy]}" I1105 07:26:32.909997 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:26:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:26:41Z 
reason:Unhealthy]}" time="2025-11-05T07:27:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:27:01Z reason:Unhealthy]}" time="2025-11-05T07:27:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-m5tnx]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:25:01Z lastTimestamp:2025-11-05T07:27:21Z reason:Unhealthy]}" I1105 07:27:33.197902 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T07:27:44Z I1105 07:28:33.493091 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T07:28:37Z time="2025-11-05T07:28:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:f4c7b60c32 namespace:openshift-ovn-kubernetes node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:ovnkube-node-v552j]}" message="{Unhealthy Readiness probe failed: + . /ovnkube-lib/ovnkube-lib.sh\n++ set -x\n++ K8S_NODE=\n++ [[ -n '' ]]\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\n+ ovndb-readiness-probe sb\n+ local dbname=sb\n+ [[ 1 -ne 1 ]]\n+ local ctlfile\n+ [[ sb = \\n\\b ]]\n+ [[ sb = \\s\\b ]]\n+ ctlfile=/var/run/ovn/ovnsb_db.ctl\n++ /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=3 ovsdb-server/sync-status\n map[firstTimestamp:2025-11-05T07:28:38Z lastTimestamp:2025-11-05T07:28:38Z reason:Unhealthy]}" time="2025-11-05T07:29:28Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:29:28Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. 
preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:29:29Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:4 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:29:29Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:29:29Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{NodeNotReady Node is not ready map[firstTimestamp:2025-11-05T07:29:29Z lastTimestamp:2025-11-05T07:29:29Z reason:NodeNotReady]}" I1105 07:29:33.750695 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T07:29:53Z 
lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:29:53Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T07:29:53Z lastTimestamp:2025-11-05T07:29:53Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:NetworkNotReady]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:5 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:29:54Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:29:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:54Z reason:FailedMount]}" time="2025-11-05T07:29:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:55Z reason:FailedMount]}" time="2025-11-05T07:29:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:55Z reason:NetworkNotReady]}" time="2025-11-05T07:29:55Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:6 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:29:55Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:29:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:55Z reason:FailedMount]}" time="2025-11-05T07:29:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:55Z reason:FailedMount]}" time="2025-11-05T07:29:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:55Z reason:FailedMount]}" time="2025-11-05T07:29:56Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount 
MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:56Z reason:FailedMount]}" time="2025-11-05T07:29:57Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:57Z reason:NetworkNotReady]}" time="2025-11-05T07:29:58Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:57Z reason:FailedMount]}" time="2025-11-05T07:29:58Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:57Z reason:FailedMount]}" time="2025-11-05T07:29:58Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:57Z reason:FailedMount]}" time="2025-11-05T07:29:58Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:58Z reason:FailedMount]}" time="2025-11-05T07:29:59Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:4 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:29:59Z reason:NetworkNotReady]}" time="2025-11-05T07:30:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:30:01Z reason:NetworkNotReady]}" time="2025-11-05T07:30:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:30:02Z reason:FailedMount]}" time="2025-11-05T07:30:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:30:02Z reason:FailedMount]}" time="2025-11-05T07:30:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:30:02Z reason:FailedMount]}" time="2025-11-05T07:30:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T07:29:54Z lastTimestamp:2025-11-05T07:30:02Z reason:FailedMount]}" time="2025-11-05T07:30:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e7a751a213 namespace:openshift-ovn-kubernetes node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:ovnkube-node-v552j]}" message="{Unhealthy Readiness probe failed: map[firstTimestamp:2025-11-05T07:30:05Z lastTimestamp:2025-11-05T07:30:05Z reason:Unhealthy]}" time="2025-11-05T07:30:07Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:574c5d057e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:30:07Z lastTimestamp:2025-11-05T07:30:07Z reason:ProbeError]}" time="2025-11-05T07:30:07Z" 
level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d312da0f65 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused map[firstTimestamp:2025-11-05T07:30:07Z lastTimestamp:2025-11-05T07:30:07Z reason:Unhealthy]}" time="2025-11-05T07:30:10Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:9fc061b6e6 namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{AddedInterface Add eth0 [10.128.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T07:30:10Z lastTimestamp:2025-11-05T07:30:10Z reason:AddedInterface]}" time="2025-11-05T07:30:10Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T07:30:10Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T07:30:10Z reason:Pulled]}" time="2025-11-05T07:30:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T07:30:11Z lastTimestamp:2025-11-05T07:30:11Z reason:Created]}" time="2025-11-05T07:30:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T07:30:11Z lastTimestamp:2025-11-05T07:30:11Z reason:Started]}" time="2025-11-05T07:30:11Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T07:30:11Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T07:30:11Z reason:Pulled]}" time="2025-11-05T07:30:11Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:3 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:30:11Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:30:12Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T07:30:12Z lastTimestamp:2025-11-05T07:30:12Z reason:Created]}" time="2025-11-05T07:30:12Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T07:30:12Z lastTimestamp:2025-11-05T07:30:12Z reason:Started]}" time="2025-11-05T07:30:12Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T07:30:12Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:30:12Z reason:Pulling]}" time="2025-11-05T07:30:12Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:4 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:30:12Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:30:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:d17f50239d namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 711ms (711ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T07:30:13Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:30:13Z reason:Pulled]}" time="2025-11-05T07:30:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T07:30:13Z lastTimestamp:2025-11-05T07:30:13Z reason:Created]}" time="2025-11-05T07:30:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T07:30:13Z lastTimestamp:2025-11-05T07:30:13Z reason:Started]}" time="2025-11-05T07:30:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:30:31Z reason:Unhealthy]}" time="2025-11-05T07:30:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:5c3a0c8511 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:monitoring-plugin-79f9bc6c-jg95p]}" message="{ProbeError Readiness probe error: Get \"https://10.128.2.10:9443/health\": dial tcp 10.128.2.10:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:30:33Z lastTimestamp:2025-11-05T07:30:33Z reason:ProbeError]}" time="2025-11-05T07:30:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e2b0d0a95d namespace:openshift-monitoring 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:monitoring-plugin-79f9bc6c-jg95p]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.2.10:9443/health\": dial tcp 10.128.2.10:9443: connect: connection refused map[firstTimestamp:2025-11-05T07:30:33Z lastTimestamp:2025-11-05T07:30:33Z reason:Unhealthy]}" I1105 07:30:34.029089 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:30:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:30:40Z reason:Unhealthy]}" time="2025-11-05T07:30:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:30:50Z reason:Unhealthy]}" time="2025-11-05T07:30:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:30:51Z reason:Unhealthy]}" time="2025-11-05T07:31:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:31:00Z reason:Unhealthy]}" time="2025-11-05T07:31:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:31:10Z reason:Unhealthy]}" time="2025-11-05T07:31:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:31:11Z reason:Unhealthy]}" time="2025-11-05T07:31:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:31:20Z reason:Unhealthy]}" 
time="2025-11-05T07:31:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:31:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:31:30Z reason:Unhealthy]}" time="2025-11-05T07:31:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:31:31Z reason:Unhealthy]}" I1105 07:31:34.342769 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:31:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-lhzn8]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:30:40Z lastTimestamp:2025-11-05T07:31:40Z reason:Unhealthy]}" time="2025-11-05T07:31:48Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:31:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:31:51Z reason:Unhealthy]}" time="2025-11-05T07:32:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:32:11Z reason:Unhealthy]}" time="2025-11-05T07:32:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:32:31Z reason:Unhealthy]}" I1105 07:32:34.598575 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:32:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-h58p6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:30:31Z lastTimestamp:2025-11-05T07:32:51Z reason:Unhealthy]}" Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T07:33:14Z I1105 07:33:34.845156 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T07:34:00Z I1105 07:34:35.099580 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:34:54Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:34:54Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:34:54Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:7 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:34:54Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:34:54Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{NodeNotReady Node is not ready map[firstTimestamp:2025-11-05T07:34:54Z lastTimestamp:2025-11-05T07:34:54Z reason:NodeNotReady]}" time="2025-11-05T07:35:11Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:35:11Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:35:11Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:35:11Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:35:11Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:35:11Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:35:12Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T07:35:12Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T07:35:12Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:4 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:4 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:4 firstTimestamp:2025-11-05T07:35:11Z lastTimestamp:2025-11-05T07:35:11Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:12Z reason:NetworkNotReady]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:12Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:12Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:12Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:12Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:8 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:35:12Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:13Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:13Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:13Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:13Z reason:FailedMount]}"
time="2025-11-05T07:35:13Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:9 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:35:13Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:35:14Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:14Z reason:FailedMount]}"
time="2025-11-05T07:35:14Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:14Z reason:FailedMount]}"
time="2025-11-05T07:35:14Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:14Z reason:FailedMount]}"
time="2025-11-05T07:35:14Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:14Z reason:FailedMount]}"
time="2025-11-05T07:35:14Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:14Z reason:NetworkNotReady]}"
time="2025-11-05T07:35:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:16Z reason:FailedMount]}"
time="2025-11-05T07:35:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:16Z reason:FailedMount]}"
time="2025-11-05T07:35:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:16Z reason:FailedMount]}"
time="2025-11-05T07:35:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:16Z reason:FailedMount]}"
time="2025-11-05T07:35:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:16Z reason:NetworkNotReady]}"
time="2025-11-05T07:35:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:18Z reason:NetworkNotReady]}"
time="2025-11-05T07:35:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:20Z reason:FailedMount]}"
time="2025-11-05T07:35:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:20Z reason:FailedMount]}"
time="2025-11-05T07:35:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:20Z reason:FailedMount]}"
time="2025-11-05T07:35:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:20Z reason:FailedMount]}"
time="2025-11-05T07:35:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T07:35:12Z lastTimestamp:2025-11-05T07:35:20Z reason:NetworkNotReady]}"
time="2025-11-05T07:35:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0c8059276e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:35:27Z lastTimestamp:2025-11-05T07:35:27Z reason:ProbeError]}"
time="2025-11-05T07:35:27Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b77166b047 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused map[firstTimestamp:2025-11-05T07:35:27Z lastTimestamp:2025-11-05T07:35:27Z reason:Unhealthy]}"
time="2025-11-05T07:35:29Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a942ede634 namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{AddedInterface Add eth0 [10.129.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T07:35:28Z lastTimestamp:2025-11-05T07:35:28Z reason:AddedInterface]}"
time="2025-11-05T07:35:29Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T07:35:29Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T07:35:29Z reason:Pulled]}"
time="2025-11-05T07:35:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:77b1142bbf namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:35:29Z lastTimestamp:2025-11-05T07:35:29Z reason:ProbeError]}"
time="2025-11-05T07:35:29Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ce75dd64b5 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused map[firstTimestamp:2025-11-05T07:35:29Z lastTimestamp:2025-11-05T07:35:29Z reason:Unhealthy]}"
time="2025-11-05T07:35:30Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:5 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:35:29Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:35:30Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T07:35:30Z lastTimestamp:2025-11-05T07:35:30Z reason:Created]}"
time="2025-11-05T07:35:30Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T07:35:30Z lastTimestamp:2025-11-05T07:35:30Z reason:Started]}"
time="2025-11-05T07:35:30Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T07:35:30Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T07:35:30Z reason:Pulled]}"
time="2025-11-05T07:35:31Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:6 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:35:30Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:35:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T07:35:31Z lastTimestamp:2025-11-05T07:35:31Z reason:Created]}"
time="2025-11-05T07:35:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T07:35:31Z lastTimestamp:2025-11-05T07:35:31Z reason:Started]}"
time="2025-11-05T07:35:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T07:35:31Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:35:31Z reason:Pulling]}"
time="2025-11-05T07:35:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:09076621c7 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 668ms (668ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T07:35:31Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:35:31Z reason:Pulled]}"
time="2025-11-05T07:35:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T07:35:32Z lastTimestamp:2025-11-05T07:35:32Z reason:Created]}"
time="2025-11-05T07:35:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T07:35:32Z lastTimestamp:2025-11-05T07:35:32Z reason:Started]}"
I1105 07:35:35.379187 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:36:04Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:3 firstTimestamp:2025-11-05T06:50:45Z lastTimestamp:2025-11-05T07:36:04Z reason:SetDesiredConfig]}"
time="2025-11-05T07:36:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-v26gk]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:36:15Z lastTimestamp:2025-11-05T07:36:15Z reason:Unhealthy]}"
time="2025-11-05T07:36:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-v26gk]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:36:15Z lastTimestamp:2025-11-05T07:36:25Z reason:Unhealthy]}"
time="2025-11-05T07:36:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:36:29Z reason:Unhealthy]}"
time="2025-11-05T07:36:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-v26gk]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:36:15Z lastTimestamp:2025-11-05T07:36:35Z reason:Unhealthy]}"
I1105 07:36:35.628072 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:36:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-v26gk]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:36:15Z lastTimestamp:2025-11-05T07:36:45Z reason:Unhealthy]}"
time="2025-11-05T07:36:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:36:49Z reason:Unhealthy]}"
time="2025-11-05T07:36:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-v26gk]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:36:15Z lastTimestamp:2025-11-05T07:36:55Z reason:Unhealthy]}"
time="2025-11-05T07:37:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:37:09Z reason:Unhealthy]}"
time="2025-11-05T07:37:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:37:29Z reason:Unhealthy]}"
I1105 07:37:36.049089 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:37:49Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:37:49Z reason:Unhealthy]}"
time="2025-11-05T07:38:09Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:38:09Z reason:Unhealthy]}"
time="2025-11-05T07:38:29Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-tqmgb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:36:29Z lastTimestamp:2025-11-05T07:38:29Z reason:Unhealthy]}"
I1105 07:38:36.368124 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T07:38:56Z
I1105 07:39:36.660680 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T07:39:47Z
time="2025-11-05T07:40:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready map[count:2 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:40:36Z reason:NodeNotReady]}"
time="2025-11-05T07:40:36Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:10 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:40:36Z reason:TopologyAwareHintsDisabled]}"
I1105 07:40:36.939781 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:40:37Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:11 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:40:37Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:41:31Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:31Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T07:41:31Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:31Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T07:41:31Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:31Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T07:41:32Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:31Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T07:41:32Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:31Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T07:41:32Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:31Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T07:41:32Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:32Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T07:41:32Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:32Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T07:41:32Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T07:41:31Z lastTimestamp:2025-11-05T07:41:32Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:32Z reason:NetworkNotReady]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:32Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:32Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:32Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:32Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:12 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:41:33Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:33Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:33Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:33Z reason:FailedMount]}"
time="2025-11-05T07:41:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:33Z reason:FailedMount]}"
time="2025-11-05T07:41:34Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:13 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:41:34Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:41:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:34Z reason:FailedMount]}"
time="2025-11-05T07:41:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:34Z reason:FailedMount]}"
time="2025-11-05T07:41:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:34Z reason:FailedMount]}"
time="2025-11-05T07:41:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:34Z reason:FailedMount]}"
time="2025-11-05T07:41:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:34Z reason:NetworkNotReady]}"
time="2025-11-05T07:41:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:36Z reason:FailedMount]}"
time="2025-11-05T07:41:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:36Z reason:FailedMount]}"
time="2025-11-05T07:41:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:36Z reason:FailedMount]}"
time="2025-11-05T07:41:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:36Z reason:FailedMount]}"
time="2025-11-05T07:41:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:36Z reason:NetworkNotReady]}"
I1105 07:41:37.202981 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:41:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:38Z reason:NetworkNotReady]}"
time="2025-11-05T07:41:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:40Z reason:FailedMount]}"
time="2025-11-05T07:41:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:40Z reason:FailedMount]}"
time="2025-11-05T07:41:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:40Z reason:FailedMount]}"
time="2025-11-05T07:41:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:40Z reason:FailedMount]}"
time="2025-11-05T07:41:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T07:41:32Z lastTimestamp:2025-11-05T07:41:40Z reason:NetworkNotReady]}"
time="2025-11-05T07:41:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:41:45Z lastTimestamp:2025-11-05T07:41:45Z reason:ProbeError]}"
time="2025-11-05T07:41:45Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T07:41:45Z lastTimestamp:2025-11-05T07:41:45Z reason:Unhealthy]}"
time="2025-11-05T07:41:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:41:46Z lastTimestamp:2025-11-05T07:41:46Z reason:ProbeError]}"
time="2025-11-05T07:41:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T07:41:46Z lastTimestamp:2025-11-05T07:41:46Z reason:Unhealthy]}"
time="2025-11-05T07:41:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T07:41:49Z lastTimestamp:2025-11-05T07:41:49Z reason:AddedInterface]}"
time="2025-11-05T07:41:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T07:41:49Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T07:41:49Z reason:Pulled]}"
time="2025-11-05T07:41:50Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:7 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:41:50Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:41:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T07:41:50Z lastTimestamp:2025-11-05T07:41:50Z reason:Created]}"
time="2025-11-05T07:41:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T07:41:50Z lastTimestamp:2025-11-05T07:41:50Z reason:Started]}"
time="2025-11-05T07:41:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T07:41:50Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T07:41:50Z reason:Pulled]}"
time="2025-11-05T07:41:51Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:8 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:41:51Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:41:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T07:41:51Z lastTimestamp:2025-11-05T07:41:51Z reason:Created]}"
time="2025-11-05T07:41:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T07:41:51Z lastTimestamp:2025-11-05T07:41:51Z reason:Started]}"
time="2025-11-05T07:41:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T07:41:51Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:41:51Z reason:Pulling]}"
time="2025-11-05T07:41:52Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:17efe72d4f namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 700ms (700ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T07:41:52Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:41:52Z reason:Pulled]}"
time="2025-11-05T07:41:52Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T07:41:52Z lastTimestamp:2025-11-05T07:41:52Z reason:Created]}"
time="2025-11-05T07:41:52Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T07:41:52Z lastTimestamp:2025-11-05T07:41:52Z reason:Started]}"
time="2025-11-05T07:42:01Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:2 firstTimestamp:2025-11-05T06:51:11Z lastTimestamp:2025-11-05T07:42:01Z reason:SetDesiredConfig]}"
time="2025-11-05T07:42:11Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d66016415a namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:thanos-querier-8649978c8-ttf2d]}" message="{ProbeError Readiness probe error: Get \"https://10.128.2.11:9091/-/ready\": dial tcp 10.128.2.11:9091: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:42:11Z lastTimestamp:2025-11-05T07:42:11Z reason:ProbeError]}"
time="2025-11-05T07:42:11Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c7e9852f44 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:thanos-querier-8649978c8-ttf2d]}" message="{Unhealthy Readiness probe failed: Get \"https://10.128.2.11:9091/-/ready\": dial tcp 10.128.2.11:9091: connect: connection refused map[firstTimestamp:2025-11-05T07:42:11Z lastTimestamp:2025-11-05T07:42:11Z reason:Unhealthy]}"
time="2025-11-05T07:42:12Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T07:42:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:888aee621e namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{ProbeError Startup probe error: Get \"http://10.131.0.10:1936/healthz/ready\": dial tcp 10.131.0.10:1936: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:42:12Z lastTimestamp:2025-11-05T07:42:12Z reason:ProbeError]}"
time="2025-11-05T07:42:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:85e7277d63 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Startup probe failed: Get \"http://10.131.0.10:1936/healthz/ready\": dial tcp 10.131.0.10:1936: connect: connection refused map[firstTimestamp:2025-11-05T07:42:12Z lastTimestamp:2025-11-05T07:42:12Z reason:Unhealthy]}"
time="2025-11-05T07:42:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:15Z reason:Unhealthy]}"
time="2025-11-05T07:42:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:15Z reason:Unhealthy]}"
time="2025-11-05T07:42:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:25Z reason:Unhealthy]}"
time="2025-11-05T07:42:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:35Z reason:Unhealthy]}"
time="2025-11-05T07:42:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:35Z reason:Unhealthy]}"
I1105 07:42:37.462126 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:42:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:45Z reason:Unhealthy]}"
time="2025-11-05T07:42:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:55Z reason:Unhealthy]}"
time="2025-11-05T07:42:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:42:55Z reason:Unhealthy]}"
time="2025-11-05T07:43:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:43:05Z reason:Unhealthy]}"
time="2025-11-05T07:43:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:43:15Z reason:Unhealthy]}"
time="2025-11-05T07:43:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:43:15Z reason:Unhealthy]}"
time="2025-11-05T07:43:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T07:43:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-gfqvw]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:43:25Z reason:Unhealthy]}"
time="2025-11-05T07:43:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:43:35Z reason:Unhealthy]}"
I1105 07:43:37.715423 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:43:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:43:55Z reason:Unhealthy]}"
time="2025-11-05T07:44:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:44:15Z reason:Unhealthy]}"
time="2025-11-05T07:44:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fb4nf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:42:15Z lastTimestamp:2025-11-05T07:44:35Z reason:Unhealthy]}"
I1105 07:44:37.963179 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T07:44:54Z
I1105 07:45:38.244872 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T07:45:44Z
time="2025-11-05T07:46:36Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T07:46:36Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T07:46:36Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:14 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:46:36Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T07:46:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{NodeNotReady Node is not ready map[count:2 firstTimestamp:2025-11-05T07:29:29Z lastTimestamp:2025-11-05T07:46:37Z reason:NodeNotReady]}"
time="2025-11-05T07:46:37Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:15 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:46:37Z reason:TopologyAwareHintsDisabled]}"
I1105 07:46:38.492663 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}"
message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:47:29Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:47:30Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:47:30Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:4 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:47:30Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:4 firstTimestamp:2025-11-05T07:47:29Z lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:4 firstTimestamp:2025-11-05T07:47:29Z 
lastTimestamp:2025-11-05T07:47:29Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:NetworkNotReady]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:16 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:47:30Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval 
matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:30Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:17 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:47:31Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:31Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:31Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:31Z reason:FailedMount]}" time="2025-11-05T07:47:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : 
[object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:31Z reason:FailedMount]}" time="2025-11-05T07:47:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:32Z reason:NetworkNotReady]}" time="2025-11-05T07:47:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:33Z reason:FailedMount]}" time="2025-11-05T07:47:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:33Z reason:FailedMount]}" time="2025-11-05T07:47:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:33Z reason:FailedMount]}" time="2025-11-05T07:47:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:33Z reason:FailedMount]}" time="2025-11-05T07:47:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:3 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:34Z reason:NetworkNotReady]}" time="2025-11-05T07:47:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:36Z reason:NetworkNotReady]}" time="2025-11-05T07:47:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:37Z reason:FailedMount]}" time="2025-11-05T07:47:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:37Z reason:FailedMount]}" time="2025-11-05T07:47:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:37Z reason:FailedMount]}" time="2025-11-05T07:47:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:37Z reason:FailedMount]}" time="2025-11-05T07:47:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T07:47:30Z lastTimestamp:2025-11-05T07:47:38Z reason:NetworkNotReady]}" I1105 07:47:38.737658 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:47:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ff804f9505 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10303/healthz\": dial tcp 10.0.128.4:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:47:46Z lastTimestamp:2025-11-05T07:47:46Z reason:ProbeError]}" time="2025-11-05T07:47:46Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c97f0f2313 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10303/healthz\": dial tcp 10.0.128.4:10303: connect: connection refused map[firstTimestamp:2025-11-05T07:47:46Z lastTimestamp:2025-11-05T07:47:46Z reason:Unhealthy]}" time="2025-11-05T07:47:46Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:9fc061b6e6 namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{AddedInterface Add eth0 [10.128.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T07:47:46Z lastTimestamp:2025-11-05T07:47:46Z reason:AddedInterface]}" time="2025-11-05T07:47:46Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T07:47:46Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T07:47:46Z reason:Pulled]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:574c5d057e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:47:47Z lastTimestamp:2025-11-05T07:47:47Z reason:ProbeError]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d312da0f65 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused map[firstTimestamp:2025-11-05T07:47:47Z lastTimestamp:2025-11-05T07:47:47Z reason:Unhealthy]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T07:47:47Z lastTimestamp:2025-11-05T07:47:47Z reason:Created]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches E2ELoki" 
locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T07:47:47Z lastTimestamp:2025-11-05T07:47:47Z reason:Started]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T07:47:47Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T07:47:47Z reason:Pulled]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:9 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:47:47Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T07:47:47Z lastTimestamp:2025-11-05T07:47:47Z reason:Created]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T07:47:47Z lastTimestamp:2025-11-05T07:47:47Z reason:Started]}" time="2025-11-05T07:47:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T07:47:47Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:47:47Z reason:Pulling]}" time="2025-11-05T07:47:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:e99b9efc79 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 687ms (687ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T07:47:48Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:47:48Z reason:Pulled]}" time="2025-11-05T07:47:48Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:3c6ea329ab namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 map[count:10 firstTimestamp:2025-11-05T07:24:42Z lastTimestamp:2025-11-05T07:47:48Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:47:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T07:47:48Z lastTimestamp:2025-11-05T07:47:48Z reason:Created]}" time="2025-11-05T07:47:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T07:47:48Z lastTimestamp:2025-11-05T07:47:48Z reason:Started]}" time="2025-11-05T07:47:59Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:16a31e5783 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:2 firstTimestamp:2025-11-05T06:51:43Z lastTimestamp:2025-11-05T07:47:59Z reason:SetDesiredConfig]}" time="2025-11-05T07:48:08Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6720352030 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:thanos-querier-8649978c8-5tnv9]}" message="{ProbeError Readiness probe error: Get \"https://10.129.2.12:9091/-/ready\": dial tcp 10.129.2.12:9091: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:48:08Z lastTimestamp:2025-11-05T07:48:08Z reason:ProbeError]}" time="2025-11-05T07:48:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:b2de4ad5b8 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:thanos-querier-8649978c8-5tnv9]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.2.12:9091/-/ready\": dial tcp 10.129.2.12:9091: connect: connection refused map[firstTimestamp:2025-11-05T07:48:08Z lastTimestamp:2025-11-05T07:48:08Z reason:Unhealthy]}" time="2025-11-05T07:48:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:48:16Z reason:Unhealthy]}" time="2025-11-05T07:48:17Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-qnm57]}" 
message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T07:48:17Z lastTimestamp:2025-11-05T07:48:17Z reason:Unhealthy]}" time="2025-11-05T07:48:27Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-qnm57]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:48:17Z lastTimestamp:2025-11-05T07:48:27Z reason:Unhealthy]}" time="2025-11-05T07:48:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:48:36Z reason:Unhealthy]}" time="2025-11-05T07:48:37Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-qnm57]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:48:17Z lastTimestamp:2025-11-05T07:48:37Z reason:Unhealthy]}" I1105 07:48:39.000304 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:48:45Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:48:47Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-qnm57]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:48:17Z lastTimestamp:2025-11-05T07:48:47Z reason:Unhealthy]}" time="2025-11-05T07:48:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:48:56Z reason:Unhealthy]}" time="2025-11-05T07:48:57Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-qnm57]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:48:17Z lastTimestamp:2025-11-05T07:48:57Z reason:Unhealthy]}" time="2025-11-05T07:49:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:49:16Z reason:Unhealthy]}" time="2025-11-05T07:49:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:49:36Z reason:Unhealthy]}" I1105 07:49:39.306915 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:49:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:49:56Z reason:Unhealthy]}" time="2025-11-05T07:50:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:50:16Z reason:Unhealthy]}" time="2025-11-05T07:50:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr 
pod:metrics-server-5b778f5ffb-kbwh5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T07:48:16Z lastTimestamp:2025-11-05T07:50:36Z reason:Unhealthy]}" I1105 07:50:39.551046 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T07:50:51Z Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T07:51:37Z I1105 07:51:39.830541 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:52:27Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:52:27Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T07:52:28Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:18 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:52:27Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:52:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{NodeNotReady Node is not ready map[count:2 firstTimestamp:2025-11-05T07:34:54Z lastTimestamp:2025-11-05T07:52:28Z reason:NodeNotReady]}" I1105 07:52:40.078633 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure 
map[firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:53:23Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T07:53:23Z lastTimestamp:2025-11-05T07:53:23Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:NetworkNotReady]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:19 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:53:24Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:24Z reason:FailedMount]}" time="2025-11-05T07:53:25Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:20 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T07:53:25Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T07:53:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:25Z reason:FailedMount]}" time="2025-11-05T07:53:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:25Z reason:FailedMount]}" time="2025-11-05T07:53:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:25Z reason:FailedMount]}" time="2025-11-05T07:53:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:25Z reason:FailedMount]}" time="2025-11-05T07:53:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:26Z reason:NetworkNotReady]}" time="2025-11-05T07:53:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:27Z reason:FailedMount]}" time="2025-11-05T07:53:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:27Z reason:FailedMount]}" time="2025-11-05T07:53:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:27Z reason:FailedMount]}" time="2025-11-05T07:53:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:27Z reason:FailedMount]}" time="2025-11-05T07:53:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:28Z reason:NetworkNotReady]}" time="2025-11-05T07:53:30Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:4 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:30Z reason:NetworkNotReady]}" time="2025-11-05T07:53:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:31Z reason:FailedMount]}" time="2025-11-05T07:53:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:31Z reason:FailedMount]}" time="2025-11-05T07:53:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:31Z reason:FailedMount]}" time="2025-11-05T07:53:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:31Z reason:FailedMount]}" time="2025-11-05T07:53:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T07:53:24Z lastTimestamp:2025-11-05T07:53:32Z reason:NetworkNotReady]}" I1105 07:53:40.358033 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:53:40Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0c8059276e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:53:40Z lastTimestamp:2025-11-05T07:53:40Z reason:ProbeError]}" time="2025-11-05T07:53:40Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b77166b047 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused map[firstTimestamp:2025-11-05T07:53:40Z lastTimestamp:2025-11-05T07:53:40Z reason:Unhealthy]}" time="2025-11-05T07:53:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a942ede634 namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{AddedInterface Add eth0 [10.129.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T07:53:40Z lastTimestamp:2025-11-05T07:53:40Z reason:AddedInterface]}" time="2025-11-05T07:53:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T07:53:40Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T07:53:40Z reason:Pulled]}" time="2025-11-05T07:53:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:77b1142bbf namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T07:53:41Z lastTimestamp:2025-11-05T07:53:41Z reason:ProbeError]}" time="2025-11-05T07:53:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ce75dd64b5 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused map[firstTimestamp:2025-11-05T07:53:41Z lastTimestamp:2025-11-05T07:53:41Z reason:Unhealthy]}" time="2025-11-05T07:53:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T07:53:42Z lastTimestamp:2025-11-05T07:53:42Z reason:Created]}" time="2025-11-05T07:53:42Z" level=info msg="event interval matches E2ELoki" 
locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T07:53:42Z lastTimestamp:2025-11-05T07:53:42Z reason:Started]}" time="2025-11-05T07:53:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T07:53:42Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T07:53:42Z reason:Pulled]}" time="2025-11-05T07:53:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T07:53:42Z lastTimestamp:2025-11-05T07:53:42Z reason:Created]}" time="2025-11-05T07:53:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T07:53:42Z lastTimestamp:2025-11-05T07:53:42Z reason:Started]}" time="2025-11-05T07:53:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T07:53:42Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:53:42Z reason:Pulling]}" time="2025-11-05T07:53:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:647335b5bd namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 641ms (641ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T07:53:43Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T07:53:43Z reason:Pulled]}" time="2025-11-05T07:53:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T07:53:43Z lastTimestamp:2025-11-05T07:53:43Z reason:Created]}" time="2025-11-05T07:53:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T07:53:43Z lastTimestamp:2025-11-05T07:53:43Z reason:Started]}" passed: (40m23s) 2025-11-05T07:54:07 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:83140-A MachineOSConfig with custom containerfile definition can be successfully applied" started: 22/53/55 "[sig-mco][Suite:openshift/machine-config-operator/disruptive][Serial][Disruptive] MCO ocb PolarionID:77781-A successfully built MachineOSConfig can be re-build" I1105 07:54:40.604810 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T07:55:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" I1105 07:55:40.842089 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:56:41.084966 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:57:41.327737 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:58:41.731834 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 07:59:41.992466 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:00:42.270375 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:01:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:9ff69c5211 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-k2sk5]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.8:9443/health\": dial tcp 10.131.0.8:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:01:12Z lastTimestamp:2025-11-05T08:01:12Z reason:ProbeError]}" time="2025-11-05T08:01:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:30d6caa879 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-k2sk5]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.8:9443/health\": dial tcp 10.131.0.8:9443: connect: connection refused map[firstTimestamp:2025-11-05T08:01:12Z lastTimestamp:2025-11-05T08:01:12Z reason:Unhealthy]}" time="2025-11-05T08:01:12Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:af07c00410 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-42bsf]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.7:8443/healthz\": dial tcp 10.131.0.7:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:01:12Z lastTimestamp:2025-11-05T08:01:12Z reason:ProbeError]}" time="2025-11-05T08:01:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:6bdc07d44f namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-42bsf]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.7:8443/healthz\": dial tcp 10.131.0.7:8443: connect: connection refused map[firstTimestamp:2025-11-05T08:01:12Z lastTimestamp:2025-11-05T08:01:12Z reason:Unhealthy]}" time="2025-11-05T08:01:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:13Z reason:Unhealthy]}" time="2025-11-05T08:01:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:4eb8080deb 
namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:monitoring-plugin-79f9bc6c-wrf4d]}" message="{ProbeError Readiness probe error: Get \"https://10.129.2.12:9443/health\": dial tcp 10.129.2.12:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:13Z reason:ProbeError]}" time="2025-11-05T08:01:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:87e70c40fb namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:monitoring-plugin-79f9bc6c-wrf4d]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.2.12:9443/health\": dial tcp 10.129.2.12:9443: connect: connection refused map[firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:13Z reason:Unhealthy]}" time="2025-11-05T08:01:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:f3aa7e42d0 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{ProbeError Startup probe error: Get \"http://10.129.2.13:1936/healthz/ready\": dial tcp 10.129.2.13:1936: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:13Z reason:ProbeError]}" time="2025-11-05T08:01:13Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d929306fb1 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Startup probe failed: Get \"http://10.129.2.13:1936/healthz/ready\": dial tcp 10.129.2.13:1936: connect: connection refused map[firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:13Z reason:Unhealthy]}" time="2025-11-05T08:01:14Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:faad63ce5a namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:thanos-querier-8649978c8-97rkm]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.15:9091/-/ready\": dial tcp 10.131.0.15:9091: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:14Z reason:ProbeError]}" time="2025-11-05T08:01:14Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2fcadd67e0 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:thanos-querier-8649978c8-97rkm]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.15:9091/-/ready\": dial tcp 10.131.0.15:9091: connect: connection refused map[firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:14Z reason:Unhealthy]}" time="2025-11-05T08:01:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:14Z reason:Unhealthy]}" time="2025-11-05T08:01:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" 
message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:01:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:24Z reason:Unhealthy]}" time="2025-11-05T08:01:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:33Z reason:Unhealthy]}" time="2025-11-05T08:01:34Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:34Z reason:Unhealthy]}" I1105 08:01:42.715349 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:01:44Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:44Z reason:Unhealthy]}" time="2025-11-05T08:01:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:01:53Z reason:Unhealthy]}" time="2025-11-05T08:01:54Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:01:54Z reason:Unhealthy]}" time="2025-11-05T08:02:04Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:01:14Z 
lastTimestamp:2025-11-05T08:02:04Z reason:Unhealthy]}" time="2025-11-05T08:02:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:02:13Z reason:Unhealthy]}" time="2025-11-05T08:02:14Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:02:14Z reason:Unhealthy]}" time="2025-11-05T08:02:24Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-78pq2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:01:14Z lastTimestamp:2025-11-05T08:02:24Z reason:Unhealthy]}" time="2025-11-05T08:02:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:02:33Z reason:Unhealthy]}" I1105 08:02:42.981503 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:02:53Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:02:53Z reason:Unhealthy]}" time="2025-11-05T08:03:13Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:03:13Z reason:Unhealthy]}" time="2025-11-05T08:03:33Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-pkq5d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:01:13Z lastTimestamp:2025-11-05T08:03:33Z reason:Unhealthy]}" I1105 08:03:43.299248 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T08:03:55Z Watch received OS update event: 
OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T08:04:34Z I1105 08:04:43.547307 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:05:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready map[count:3 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:05:23Z reason:NodeNotReady]}" time="2025-11-05T08:05:23Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:21 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:05:23Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:05:24Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:22 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:05:24Z reason:TopologyAwareHintsDisabled]}" I1105 08:05:43.815151 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node 
ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:05:47Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:47Z reason:NetworkNotReady]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:23 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:05:48Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:48Z reason:FailedMount]}" time="2025-11-05T08:05:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:49Z reason:NetworkNotReady]}" time="2025-11-05T08:05:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:49Z reason:FailedMount]}" time="2025-11-05T08:05:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:49Z reason:FailedMount]}" time="2025-11-05T08:05:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:49Z reason:FailedMount]}" time="2025-11-05T08:05:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:49Z reason:FailedMount]}" time="2025-11-05T08:05:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 
pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:51Z reason:NetworkNotReady]}" time="2025-11-05T08:05:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:51Z reason:FailedMount]}" time="2025-11-05T08:05:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:51Z reason:FailedMount]}" time="2025-11-05T08:05:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:51Z reason:FailedMount]}" time="2025-11-05T08:05:51Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:51Z reason:FailedMount]}" time="2025-11-05T08:05:53Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:53Z reason:NetworkNotReady]}" time="2025-11-05T08:05:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T08:05:47Z lastTimestamp:2025-11-05T08:05:55Z reason:NetworkNotReady]}" time="2025-11-05T08:05:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:55Z reason:FailedMount]}" time="2025-11-05T08:05:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:55Z reason:FailedMount]}" time="2025-11-05T08:05:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:55Z reason:FailedMount]}" time="2025-11-05T08:05:55Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:05:48Z lastTimestamp:2025-11-05T08:05:55Z reason:FailedMount]}" time="2025-11-05T08:06:01Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:06:01Z lastTimestamp:2025-11-05T08:06:01Z reason:ProbeError]}" time="2025-11-05T08:06:01Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:06:01Z lastTimestamp:2025-11-05T08:06:01Z reason:Unhealthy]}" time="2025-11-05T08:06:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:06:03Z 
lastTimestamp:2025-11-05T08:06:03Z reason:ProbeError]}" time="2025-11-05T08:06:03Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:06:03Z lastTimestamp:2025-11-05T08:06:03Z reason:Unhealthy]}" time="2025-11-05T08:06:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:06:04Z lastTimestamp:2025-11-05T08:06:04Z reason:AddedInterface]}" time="2025-11-05T08:06:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:06:04Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:06:04Z reason:Pulled]}" time="2025-11-05T08:06:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:06:05Z lastTimestamp:2025-11-05T08:06:05Z reason:Created]}" time="2025-11-05T08:06:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:06:05Z lastTimestamp:2025-11-05T08:06:05Z reason:Started]}" time="2025-11-05T08:06:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:06:05Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:06:05Z reason:Pulled]}" time="2025-11-05T08:06:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:06:06Z lastTimestamp:2025-11-05T08:06:06Z reason:Created]}" time="2025-11-05T08:06:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:06:06Z lastTimestamp:2025-11-05T08:06:06Z reason:Started]}" time="2025-11-05T08:06:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling 
image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:06:06Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:06:06Z reason:Pulling]}" time="2025-11-05T08:06:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:27af467a51 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 634ms (634ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T08:06:07Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:06:07Z reason:Pulled]}" time="2025-11-05T08:06:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:06:07Z lastTimestamp:2025-11-05T08:06:07Z reason:Created]}" time="2025-11-05T08:06:07Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:06:07Z lastTimestamp:2025-11-05T08:06:07Z reason:Started]}" time="2025-11-05T08:06:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0dc076f2c2 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-vdvqf]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.9:9443/health\": dial tcp 10.131.0.9:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:06:25Z lastTimestamp:2025-11-05T08:06:25Z reason:ProbeError]}" time="2025-11-05T08:06:25Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:86e315bccb namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-vdvqf]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.9:9443/health\": dial tcp 10.131.0.9:9443: connect: connection refused map[firstTimestamp:2025-11-05T08:06:25Z lastTimestamp:2025-11-05T08:06:25Z reason:Unhealthy]}" time="2025-11-05T08:06:26Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:06:26Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:58a9ac7147 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-627kw]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.10:8443/healthz\": dial tcp 10.131.0.10:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:06:26Z lastTimestamp:2025-11-05T08:06:26Z reason:ProbeError]}" time="2025-11-05T08:06:26Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:fdac00311f namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-627kw]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.10:8443/healthz\": dial tcp 10.131.0.10:8443: connect: connection refused map[firstTimestamp:2025-11-05T08:06:26Z lastTimestamp:2025-11-05T08:06:26Z reason:Unhealthy]}" time="2025-11-05T08:06:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:06:31Z reason:Unhealthy]}" time="2025-11-05T08:06:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:06:32Z reason:Unhealthy]}" time="2025-11-05T08:06:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:06:41Z reason:Unhealthy]}" I1105 08:06:44.100516 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:06:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:06:51Z reason:Unhealthy]}" time="2025-11-05T08:06:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:06:52Z reason:Unhealthy]}" time="2025-11-05T08:07:01Z" level=info msg="event interval 
matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:07:01Z reason:Unhealthy]}" time="2025-11-05T08:07:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:07:11Z reason:Unhealthy]}" time="2025-11-05T08:07:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:07:12Z reason:Unhealthy]}" time="2025-11-05T08:07:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:07:21Z reason:Unhealthy]}" time="2025-11-05T08:07:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m6f8d]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:06:31Z lastTimestamp:2025-11-05T08:07:31Z reason:Unhealthy]}" time="2025-11-05T08:07:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:07:32Z reason:Unhealthy]}" I1105 08:07:44.356947 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:07:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:07:52Z reason:Unhealthy]}" time="2025-11-05T08:08:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:06:32Z 
lastTimestamp:2025-11-05T08:08:12Z reason:Unhealthy]}" time="2025-11-05T08:08:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:08:32Z reason:Unhealthy]}" I1105 08:08:44.593737 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:08:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-gv6kv]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:06:32Z lastTimestamp:2025-11-05T08:08:52Z reason:Unhealthy]}" Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T08:09:07Z I1105 08:09:44.855131 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T08:09:50Z time="2025-11-05T08:10:39Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:10:39Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:10:40Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:25 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:10:40Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:10:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{NodeNotReady Node is not ready map[count:3 firstTimestamp:2025-11-05T07:29:29Z lastTimestamp:2025-11-05T08:10:40Z reason:NodeNotReady]}" I1105 08:10:45.116137 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasSufficientPID roles:worker]}" 
time="2025-11-05T08:10:59Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:10:59Z lastTimestamp:2025-11-05T08:10:59Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:NetworkNotReady]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object 
\"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:00Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:00Z reason:FailedMount]}" time="2025-11-05T08:11:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:01Z reason:FailedMount]}" time="2025-11-05T08:11:01Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:01Z reason:FailedMount]}" time="2025-11-05T08:11:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:01Z reason:FailedMount]}" time="2025-11-05T08:11:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 
namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:02Z reason:FailedMount]}" time="2025-11-05T08:11:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:02Z reason:NetworkNotReady]}" time="2025-11-05T08:11:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:03Z reason:FailedMount]}" time="2025-11-05T08:11:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:03Z reason:FailedMount]}" time="2025-11-05T08:11:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:03Z reason:FailedMount]}" time="2025-11-05T08:11:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:04Z reason:FailedMount]}" time="2025-11-05T08:11:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:3 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:04Z reason:NetworkNotReady]}" time="2025-11-05T08:11:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:06Z reason:NetworkNotReady]}" time="2025-11-05T08:11:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:07Z reason:FailedMount]}" time="2025-11-05T08:11:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:07Z reason:FailedMount]}" time="2025-11-05T08:11:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:07Z reason:FailedMount]}" time="2025-11-05T08:11:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:08Z reason:FailedMount]}" time="2025-11-05T08:11:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T08:11:00Z lastTimestamp:2025-11-05T08:11:08Z reason:NetworkNotReady]}" time="2025-11-05T08:11:14Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ff804f9505 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10303/healthz\": dial tcp 10.0.128.4:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:11:14Z lastTimestamp:2025-11-05T08:11:14Z reason:ProbeError]}" time="2025-11-05T08:11:14Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c97f0f2313 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10303/healthz\": dial tcp 10.0.128.4:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:11:14Z lastTimestamp:2025-11-05T08:11:14Z reason:Unhealthy]}" time="2025-11-05T08:11:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:9fc061b6e6 namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{AddedInterface Add eth0 [10.128.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:11:16Z lastTimestamp:2025-11-05T08:11:16Z reason:AddedInterface]}" time="2025-11-05T08:11:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:11:16Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:11:16Z reason:Pulled]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:11:18Z lastTimestamp:2025-11-05T08:11:18Z reason:Created]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:11:18Z lastTimestamp:2025-11-05T08:11:18Z reason:Started]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:11:18Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:11:18Z reason:Pulled]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:574c5d057e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10300/healthz\": dial tcp 
10.0.128.4:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:11:18Z lastTimestamp:2025-11-05T08:11:18Z reason:ProbeError]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d312da0f65 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:11:18Z lastTimestamp:2025-11-05T08:11:18Z reason:Unhealthy]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:11:18Z lastTimestamp:2025-11-05T08:11:18Z reason:Created]}" time="2025-11-05T08:11:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:11:18Z lastTimestamp:2025-11-05T08:11:18Z reason:Started]}" time="2025-11-05T08:11:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:11:18Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:11:18Z reason:Pulling]}" time="2025-11-05T08:11:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:55e0700372 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 666ms (666ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T08:11:19Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:11:19Z reason:Pulled]}" time="2025-11-05T08:11:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:11:19Z lastTimestamp:2025-11-05T08:11:19Z reason:Created]}" time="2025-11-05T08:11:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:11:19Z lastTimestamp:2025-11-05T08:11:19Z reason:Started]}" I1105 08:11:45.370111 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:11:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:11:45Z reason:Unhealthy]}" time="2025-11-05T08:11:55Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:11:55Z reason:Unhealthy]}" time="2025-11-05T08:11:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:11:56Z reason:Unhealthy]}" time="2025-11-05T08:12:05Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:12:05Z reason:Unhealthy]}" time="2025-11-05T08:12:15Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:12:15Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:12:15Z reason:Unhealthy]}" time="2025-11-05T08:12:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:12:16Z reason:Unhealthy]}" time="2025-11-05T08:12:25Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:12:25Z reason:Unhealthy]}" time="2025-11-05T08:12:35Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:12:35Z reason:Unhealthy]}" time="2025-11-05T08:12:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:12:36Z reason:Unhealthy]}" time="2025-11-05T08:12:45Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-zmbnb]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:11:45Z lastTimestamp:2025-11-05T08:12:45Z reason:Unhealthy]}" I1105 08:12:45.668242 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:12:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:12:56Z reason:Unhealthy]}" time="2025-11-05T08:13:16Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr 
pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:13:16Z reason:Unhealthy]}" time="2025-11-05T08:13:36Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:13:36Z reason:Unhealthy]}" I1105 08:13:45.938118 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:13:56Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-g4lwf]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:11:56Z lastTimestamp:2025-11-05T08:13:56Z reason:Unhealthy]}" Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T08:14:20Z I1105 08:14:46.175741 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T08:15:03Z I1105 08:15:46.419999 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:15:50Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:15:50Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:15:50Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:29 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:15:50Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:15:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{NodeNotReady Node is not ready map[count:3 firstTimestamp:2025-11-05T07:34:54Z lastTimestamp:2025-11-05T08:15:50Z reason:NodeNotReady]}" time="2025-11-05T08:16:22Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:16:22Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:16:22Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:16:22Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:16:22Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:16:22Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:16:23Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:16:23Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:16:23Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:4 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:4 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:4 firstTimestamp:2025-11-05T08:16:22Z lastTimestamp:2025-11-05T08:16:22Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:23Z reason:NetworkNotReady]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:23Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:23Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:23Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:23Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:24Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:24Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:24Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr 
pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:24Z reason:FailedMount]}" time="2025-11-05T08:16:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:24Z reason:NetworkNotReady]}" time="2025-11-05T08:16:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:25Z reason:FailedMount]}" time="2025-11-05T08:16:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:25Z reason:FailedMount]}" time="2025-11-05T08:16:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:25Z reason:FailedMount]}" time="2025-11-05T08:16:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:25Z reason:FailedMount]}" time="2025-11-05T08:16:26Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:3 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:26Z reason:NetworkNotReady]}" time="2025-11-05T08:16:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:27Z reason:FailedMount]}" time="2025-11-05T08:16:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:27Z reason:FailedMount]}" time="2025-11-05T08:16:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:27Z reason:FailedMount]}" time="2025-11-05T08:16:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:27Z reason:FailedMount]}" time="2025-11-05T08:16:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:28Z reason:NetworkNotReady]}" time="2025-11-05T08:16:30Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:30Z reason:NetworkNotReady]}" time="2025-11-05T08:16:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:31Z reason:FailedMount]}" time="2025-11-05T08:16:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:31Z reason:FailedMount]}" time="2025-11-05T08:16:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:31Z reason:FailedMount]}" time="2025-11-05T08:16:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:16:23Z lastTimestamp:2025-11-05T08:16:31Z reason:FailedMount]}" time="2025-11-05T08:16:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0c8059276e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:16:35Z lastTimestamp:2025-11-05T08:16:35Z reason:ProbeError]}" time="2025-11-05T08:16:35Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b77166b047 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:16:35Z lastTimestamp:2025-11-05T08:16:35Z reason:Unhealthy]}" time="2025-11-05T08:16:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:77b1142bbf namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:16:39Z 
lastTimestamp:2025-11-05T08:16:39Z reason:ProbeError]}" time="2025-11-05T08:16:39Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ce75dd64b5 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:16:39Z lastTimestamp:2025-11-05T08:16:39Z reason:Unhealthy]}" time="2025-11-05T08:16:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a942ede634 namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{AddedInterface Add eth0 [10.129.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:16:40Z lastTimestamp:2025-11-05T08:16:40Z reason:AddedInterface]}" time="2025-11-05T08:16:40Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:16:40Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:16:40Z reason:Pulled]}" time="2025-11-05T08:16:41Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:16:41Z lastTimestamp:2025-11-05T08:16:41Z reason:Created]}" time="2025-11-05T08:16:41Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:16:41Z lastTimestamp:2025-11-05T08:16:41Z reason:Started]}" time="2025-11-05T08:16:41Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:16:41Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:16:41Z reason:Pulled]}" time="2025-11-05T08:16:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:16:42Z lastTimestamp:2025-11-05T08:16:42Z reason:Created]}" time="2025-11-05T08:16:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:16:42Z lastTimestamp:2025-11-05T08:16:42Z reason:Started]}" time="2025-11-05T08:16:42Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulling Pulling 
image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:16:42Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:16:42Z reason:Pulling]}" time="2025-11-05T08:16:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:8b845110e6 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 719ms (719ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T08:16:43Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:16:43Z reason:Pulled]}" time="2025-11-05T08:16:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:16:43Z lastTimestamp:2025-11-05T08:16:43Z reason:Created]}" time="2025-11-05T08:16:43Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:16:43Z lastTimestamp:2025-11-05T08:16:43Z reason:Started]}" I1105 08:16:46.669033 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:17:46.947797 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:18:47.182115 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:19:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" I1105 08:19:47.418249 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:20:47.686813 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:21:47.943143 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:22:48.352258 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' I1105 08:23:48.691530 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:24:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:5d8b1c87ca namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:thanos-querier-8649978c8-qb4m9]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 502 map[firstTimestamp:2025-11-05T08:24:17Z lastTimestamp:2025-11-05T08:24:17Z reason:Unhealthy]}" time="2025-11-05T08:24:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:24:18Z reason:Unhealthy]}" time="2025-11-05T08:24:19Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c992be81fd namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:monitoring-plugin-79f9bc6c-bt7j7]}" message="{ProbeError Readiness probe error: Get \"https://10.129.2.9:9443/health\": dial tcp 10.129.2.9:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:24:19Z lastTimestamp:2025-11-05T08:24:19Z reason:ProbeError]}" time="2025-11-05T08:24:19Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:7aa37568e2 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:monitoring-plugin-79f9bc6c-bt7j7]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.2.9:9443/health\": dial tcp 10.129.2.9:9443: connect: connection refused map[firstTimestamp:2025-11-05T08:24:19Z lastTimestamp:2025-11-05T08:24:19Z reason:Unhealthy]}" time="2025-11-05T08:24:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b44f4a8781 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:prometheus-operator-admission-webhook-678bdc6597-tl44d]}" message="{ProbeError Readiness probe error: Get \"https://10.129.2.11:8443/healthz\": dial tcp 10.129.2.11:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:24:20Z lastTimestamp:2025-11-05T08:24:20Z reason:ProbeError]}" time="2025-11-05T08:24:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:23a6e31fc9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:prometheus-operator-admission-webhook-678bdc6597-tl44d]}" message="{Unhealthy Readiness probe failed: Get \"https://10.129.2.11:8443/healthz\": dial tcp 
10.129.2.11:8443: connect: connection refused map[firstTimestamp:2025-11-05T08:24:20Z lastTimestamp:2025-11-05T08:24:20Z reason:Unhealthy]}" time="2025-11-05T08:24:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:24:28Z reason:Unhealthy]}" time="2025-11-05T08:24:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:24:28Z reason:Unhealthy]}" time="2025-11-05T08:24:38Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:24:38Z reason:Unhealthy]}" time="2025-11-05T08:24:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:24:48Z reason:Unhealthy]}" time="2025-11-05T08:24:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:24:48Z reason:Unhealthy]}" I1105 08:24:49.031131 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:24:58Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:24:58Z reason:Unhealthy]}" time="2025-11-05T08:25:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:25:08Z reason:Unhealthy]}" time="2025-11-05T08:25:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:25:08Z reason:Unhealthy]}" time="2025-11-05T08:25:18Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:25:18Z reason:Unhealthy]}" time="2025-11-05T08:25:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:25:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-8rllr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:24:18Z lastTimestamp:2025-11-05T08:25:28Z reason:Unhealthy]}" time="2025-11-05T08:25:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:25:28Z reason:Unhealthy]}" time="2025-11-05T08:25:48Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:25:48Z reason:Unhealthy]}" I1105 08:25:49.308406 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:26:08Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:26:08Z reason:Unhealthy]}" time="2025-11-05T08:26:28Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-kbb4p]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 
firstTimestamp:2025-11-05T08:24:28Z lastTimestamp:2025-11-05T08:26:28Z reason:Unhealthy]}" I1105 08:26:49.586381 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T08:27:01Z Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T08:27:22Z I1105 08:27:49.879047 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:28:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready map[count:4 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:28:21Z reason:NodeNotReady]}" time="2025-11-05T08:28:22Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:33 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:28:22Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:28:23Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:34 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:28:23Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:28:29Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:29Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:28:29Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:29Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:28:29Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:29Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:28:30Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:30Z reason:NodeHasSufficientMemory roles:worker]}" 
time="2025-11-05T08:28:30Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:30Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:28:30Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:30Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:28:30Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:30Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:28:30Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:30Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:28:30Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:28:29Z lastTimestamp:2025-11-05T08:28:30Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:30Z reason:NetworkNotReady]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:30Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:30Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:30Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:30Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:31Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:31Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:31Z reason:FailedMount]}" time="2025-11-05T08:28:31Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 
pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:31Z reason:FailedMount]}" time="2025-11-05T08:28:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:32Z reason:FailedMount]}" time="2025-11-05T08:28:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:32Z reason:FailedMount]}" time="2025-11-05T08:28:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:32Z reason:FailedMount]}" time="2025-11-05T08:28:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:32Z reason:FailedMount]}" time="2025-11-05T08:28:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:2 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:32Z reason:NetworkNotReady]}" time="2025-11-05T08:28:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:34Z reason:FailedMount]}" time="2025-11-05T08:28:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:34Z reason:FailedMount]}" time="2025-11-05T08:28:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:34Z reason:FailedMount]}" time="2025-11-05T08:28:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:34Z reason:FailedMount]}" time="2025-11-05T08:28:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:34Z reason:NetworkNotReady]}" time="2025-11-05T08:28:36Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:4 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:36Z reason:NetworkNotReady]}" time="2025-11-05T08:28:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:38Z reason:FailedMount]}" time="2025-11-05T08:28:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:38Z reason:FailedMount]}" time="2025-11-05T08:28:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:38Z reason:FailedMount]}" time="2025-11-05T08:28:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:38Z reason:FailedMount]}" time="2025-11-05T08:28:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:5 firstTimestamp:2025-11-05T08:28:30Z lastTimestamp:2025-11-05T08:28:38Z reason:NetworkNotReady]}" time="2025-11-05T08:28:44Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:28:44Z lastTimestamp:2025-11-05T08:28:44Z reason:ProbeError]}" time="2025-11-05T08:28:44Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:28:44Z lastTimestamp:2025-11-05T08:28:44Z reason:Unhealthy]}" time="2025-11-05T08:28:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:28:47Z lastTimestamp:2025-11-05T08:28:47Z reason:AddedInterface]}" time="2025-11-05T08:28:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:28:47Z lastTimestamp:2025-11-05T08:28:47Z reason:ProbeError]}" time="2025-11-05T08:28:47Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:28:47Z lastTimestamp:2025-11-05T08:28:47Z reason:Unhealthy]}" time="2025-11-05T08:28:47Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:28:47Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:28:47Z reason:Pulled]}" time="2025-11-05T08:28:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:28:48Z lastTimestamp:2025-11-05T08:28:48Z reason:Created]}" time="2025-11-05T08:28:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 
pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:28:48Z lastTimestamp:2025-11-05T08:28:48Z reason:Started]}" time="2025-11-05T08:28:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:28:48Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:28:48Z reason:Pulled]}" time="2025-11-05T08:28:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:28:49Z lastTimestamp:2025-11-05T08:28:49Z reason:Created]}" time="2025-11-05T08:28:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:28:49Z lastTimestamp:2025-11-05T08:28:49Z reason:Started]}" time="2025-11-05T08:28:49Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:28:49Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:28:49Z reason:Pulling]}" time="2025-11-05T08:28:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:4393e26527 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 629ms (629ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T08:28:50Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:28:50Z reason:Pulled]}" time="2025-11-05T08:28:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:28:50Z lastTimestamp:2025-11-05T08:28:50Z reason:Created]}" time="2025-11-05T08:28:50Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:28:50Z lastTimestamp:2025-11-05T08:28:50Z reason:Started]}" I1105 08:28:50.137680 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:29:09Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:2910c037c4 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-4hqpf]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.18:8443/healthz\": dial tcp 10.131.0.18:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:29:09Z lastTimestamp:2025-11-05T08:29:09Z reason:ProbeError]}" time="2025-11-05T08:29:09Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:616760e29e namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-4hqpf]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.18:8443/healthz\": dial tcp 10.131.0.18:8443: connect: connection refused map[firstTimestamp:2025-11-05T08:29:09Z lastTimestamp:2025-11-05T08:29:09Z reason:Unhealthy]}" time="2025-11-05T08:29:09Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:888aee621e namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{ProbeError Startup probe error: Get \"http://10.131.0.10:1936/healthz/ready\": dial tcp 10.131.0.10:1936: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:29:09Z lastTimestamp:2025-11-05T08:29:09Z reason:ProbeError]}" time="2025-11-05T08:29:09Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:85e7277d63 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Startup probe failed: Get \"http://10.131.0.10:1936/healthz/ready\": dial tcp 10.131.0.10:1936: connect: connection refused map[firstTimestamp:2025-11-05T08:29:09Z lastTimestamp:2025-11-05T08:29:09Z reason:Unhealthy]}" time="2025-11-05T08:29:10Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. 
preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:29:12Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m4qr5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:29:11Z lastTimestamp:2025-11-05T08:29:11Z reason:Unhealthy]}" time="2025-11-05T08:29:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:29:20Z reason:Unhealthy]}" time="2025-11-05T08:29:22Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m4qr5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:29:11Z lastTimestamp:2025-11-05T08:29:21Z reason:Unhealthy]}" time="2025-11-05T08:29:32Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m4qr5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:29:11Z lastTimestamp:2025-11-05T08:29:31Z reason:Unhealthy]}" time="2025-11-05T08:29:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:29:40Z reason:Unhealthy]}" time="2025-11-05T08:29:42Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m4qr5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:29:11Z lastTimestamp:2025-11-05T08:29:41Z reason:Unhealthy]}" I1105 08:29:50.397141 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:29:52Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-m4qr5]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:29:11Z lastTimestamp:2025-11-05T08:29:51Z reason:Unhealthy]}" time="2025-11-05T08:30:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:30:00Z reason:Unhealthy]}" time="2025-11-05T08:30:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:30:20Z reason:Unhealthy]}" time="2025-11-05T08:30:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:30:40Z reason:Unhealthy]}" I1105 08:30:50.642614 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:31:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:31:00Z reason:Unhealthy]}" time="2025-11-05T08:31:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-6p8k4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:29:20Z lastTimestamp:2025-11-05T08:31:20Z reason:Unhealthy]}" I1105 08:31:50.928037 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T08:31:51Z Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt - 2025-11-05T08:32:11Z I1105 08:32:51.181349 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:32:57Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:32:57Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" time="2025-11-05T08:32:57Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:36 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:32:57Z reason:TopologyAwareHintsDisabled]}" time="2025-11-05T08:32:58Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{NodeNotReady Node is not ready map[count:4 firstTimestamp:2025-11-05T07:29:29Z lastTimestamp:2025-11-05T08:32:57Z reason:NodeNotReady]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z 
reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:33:20Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:f7fa0ea27b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:3a3c4cf390 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:506d7f331d node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:33:20Z lastTimestamp:2025-11-05T08:33:20Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:NetworkNotReady]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt 
pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:21Z reason:FailedMount]}" time="2025-11-05T08:33:22Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:22Z reason:FailedMount]}" time="2025-11-05T08:33:22Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:22Z reason:FailedMount]}" time="2025-11-05T08:33:22Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:22Z reason:FailedMount]}" time="2025-11-05T08:33:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:22Z reason:FailedMount]}" time="2025-11-05T08:33:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
map[count:2 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:23Z reason:NetworkNotReady]}" time="2025-11-05T08:33:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:24Z reason:FailedMount]}" time="2025-11-05T08:33:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:24Z reason:FailedMount]}" time="2025-11-05T08:33:24Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:24Z reason:FailedMount]}" time="2025-11-05T08:33:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:24Z reason:FailedMount]}" time="2025-11-05T08:33:25Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:25Z reason:NetworkNotReady]}" time="2025-11-05T08:33:27Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
time="2025-11-05T08:33:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:28Z reason:FailedMount]}"
time="2025-11-05T08:33:28Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:28Z reason:FailedMount]}"
time="2025-11-05T08:33:29Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:28Z reason:FailedMount]}"
time="2025-11-05T08:33:29Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:767a594e23 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-kq7gh\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:28Z reason:FailedMount]}"
time="2025-11-05T08:33:29Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T08:33:21Z lastTimestamp:2025-11-05T08:33:29Z reason:NetworkNotReady]}"
time="2025-11-05T08:33:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ff804f9505 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10303/healthz\": dial tcp 10.0.128.4:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:33:33Z lastTimestamp:2025-11-05T08:33:33Z reason:ProbeError]}"
time="2025-11-05T08:33:33Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:c97f0f2313 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10303/healthz\": dial tcp 10.0.128.4:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:33:33Z lastTimestamp:2025-11-05T08:33:33Z reason:Unhealthy]}"
time="2025-11-05T08:33:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:574c5d057e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:33:34Z lastTimestamp:2025-11-05T08:33:34Z reason:ProbeError]}"
time="2025-11-05T08:33:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:d312da0f65 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:gcp-pd-csi-driver-node-42zwr]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.4:10300/healthz\": dial tcp 10.0.128.4:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:33:34Z lastTimestamp:2025-11-05T08:33:34Z reason:Unhealthy]}"
time="2025-11-05T08:33:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:9fc061b6e6 namespace:openshift-e2e-loki pod:loki-promtail-kchg8]}" message="{AddedInterface Add eth0 [10.128.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:33:37Z lastTimestamp:2025-11-05T08:33:37Z reason:AddedInterface]}"
time="2025-11-05T08:33:37Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:33:37Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:33:37Z reason:Pulled]}"
time="2025-11-05T08:33:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:33:38Z lastTimestamp:2025-11-05T08:33:38Z reason:Created]}"
time="2025-11-05T08:33:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:33:38Z lastTimestamp:2025-11-05T08:33:38Z reason:Started]}"
pod:loki-promtail-kchg8]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:33:38Z lastTimestamp:2025-11-05T08:33:38Z reason:Started]}" time="2025-11-05T08:33:38Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:33:38Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:33:38Z reason:Pulled]}" time="2025-11-05T08:33:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:33:39Z lastTimestamp:2025-11-05T08:33:39Z reason:Created]}" time="2025-11-05T08:33:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:33:39Z lastTimestamp:2025-11-05T08:33:39Z reason:Started]}" time="2025-11-05T08:33:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:33:39Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:33:39Z reason:Pulling]}" time="2025-11-05T08:33:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:2712a86b5e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 772ms (772ms including waiting). Image size: 9597573 bytes. 
map[container:prod-bearer-token firstTimestamp:2025-11-05T08:33:39Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:33:39Z reason:Pulled]}" time="2025-11-05T08:33:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:33:39Z lastTimestamp:2025-11-05T08:33:39Z reason:Created]}" time="2025-11-05T08:33:39Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:loki-promtail-kchg8]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:33:39Z lastTimestamp:2025-11-05T08:33:39Z reason:Started]}" I1105 08:33:51.465065 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:34:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:34:00Z reason:Unhealthy]}" time="2025-11-05T08:34:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:34:01Z reason:Unhealthy]}" time="2025-11-05T08:34:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:34:11Z reason:Unhealthy]}" time="2025-11-05T08:34:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:34:20Z reason:Unhealthy]}" time="2025-11-05T08:34:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:34:21Z reason:Unhealthy]}" time="2025-11-05T08:34:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 
time="2025-11-05T08:34:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:34:40Z reason:Unhealthy]}"
time="2025-11-05T08:34:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:34:41Z reason:Unhealthy]}"
time="2025-11-05T08:34:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:34:51Z reason:Unhealthy]}"
I1105 08:34:51.732259 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:35:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:35:00Z reason:Unhealthy]}"
time="2025-11-05T08:35:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:35:01Z reason:Unhealthy]}"
time="2025-11-05T08:35:04Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T08:35:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:router-default-5f49b749c7-2smh4]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:34:01Z lastTimestamp:2025-11-05T08:35:11Z reason:Unhealthy]}"
time="2025-11-05T08:35:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:35:20Z reason:Unhealthy]}"
time="2025-11-05T08:35:40Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:35:40Z reason:Unhealthy]}"
I1105 08:35:52.024568 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:36:00Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:36:00Z reason:Unhealthy]}"
time="2025-11-05T08:36:20Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:metrics-server-5b778f5ffb-ksl4t]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:34:00Z lastTimestamp:2025-11-05T08:36:20Z reason:Unhealthy]}"
Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T08:36:41Z
I1105 08:36:52.288257 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr - 2025-11-05T08:37:03Z
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 4h0m0s timeout","severity":"error","time":"2025-11-05T08:37:40Z"}
Requesting risk analysis for test failures in this job run from sippy:
I1105 08:37:40.965014 39938 factory.go:195] Registered Plugin "containerd"
I1105 08:37:41.001035 39938 i18n.go:119] Couldn't find the LC_ALL, LC_MESSAGES or LANG environment variables, defaulting to en_US
time="2025-11-05T08:37:41Z" level=warning msg="ENABLE_STORAGE_GCE_PD_DRIVER is set, but is not supported"
I1105 08:37:41.582408 39938 binary.go:77] Found 8499 test specs
I1105 08:37:41.586423 39938 binary.go:94] 1049 test specs remain, after filtering out k8s
openshift-tests v4.1.0-10286-gc82b843
time="2025-11-05T08:37:41Z" level=info msg="Scanning for test-failures-summary files in: /logs/artifacts/junit"
time="2025-11-05T08:37:41Z" level=info msg="Found files: []"
time="2025-11-05T08:37:41Z" level=info msg="Missing : test-failures-summary file(s), exiting"
time="2025-11-05T08:37:48Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T08:37:48Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{NodeNotReady Node is not ready map[count:4 firstTimestamp:2025-11-05T07:34:54Z lastTimestamp:2025-11-05T08:37:48Z reason:NodeNotReady]}"
time="2025-11-05T08:37:48Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:38 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:37:48Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T08:37:48Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-0]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
I1105 08:37:52.566537 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:38:14Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:14Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:14Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:14Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:15Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:15Z reason:NodeHasNoDiskPressure roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:2 firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:15Z reason:NodeHasSufficientPID roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:4a36419b2b node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:15Z reason:NodeHasSufficientMemory roles:worker]}"
time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:7af51874d8 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:15Z reason:NodeHasNoDiskPressure roles:worker]}"
reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:38:15Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:be149cb561 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:38:14Z lastTimestamp:2025-11-05T08:38:15Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:15Z reason:NetworkNotReady]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:15Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:15Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:15Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:15Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:16Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki 
node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:16Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:16Z reason:FailedMount]}" time="2025-11-05T08:38:16Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:16Z reason:FailedMount]}" time="2025-11-05T08:38:17Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:17Z reason:FailedMount]}" time="2025-11-05T08:38:17Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:17Z reason:FailedMount]}" time="2025-11-05T08:38:17Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:17Z reason:FailedMount]}" time="2025-11-05T08:38:17Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:17Z reason:FailedMount]}" time="2025-11-05T08:38:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false 
time="2025-11-05T08:38:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:19Z reason:FailedMount]}"
time="2025-11-05T08:38:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:19Z reason:FailedMount]}"
time="2025-11-05T08:38:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:19Z reason:FailedMount]}"
time="2025-11-05T08:38:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:19Z reason:FailedMount]}"
time="2025-11-05T08:38:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:19Z reason:NetworkNotReady]}"
time="2025-11-05T08:38:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:21Z reason:NetworkNotReady]}"
time="2025-11-05T08:38:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:23Z reason:FailedMount]}"
time="2025-11-05T08:38:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:23Z reason:FailedMount]}"
time="2025-11-05T08:38:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:23Z reason:FailedMount]}"
time="2025-11-05T08:38:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a1af630b5 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-mpvvl\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:23Z reason:FailedMount]}"
time="2025-11-05T08:38:23Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T08:38:15Z lastTimestamp:2025-11-05T08:38:23Z reason:NetworkNotReady]}"
time="2025-11-05T08:38:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:a942ede634 namespace:openshift-e2e-loki pod:loki-promtail-tqnvt]}" message="{AddedInterface Add eth0 [10.129.2.4/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:38:32Z lastTimestamp:2025-11-05T08:38:32Z reason:AddedInterface]}"
time="2025-11-05T08:38:32Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:38:32Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:38:32Z reason:Pulled]}"
time="2025-11-05T08:38:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:38:33Z lastTimestamp:2025-11-05T08:38:33Z reason:Created]}"
time="2025-11-05T08:38:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:38:33Z lastTimestamp:2025-11-05T08:38:33Z reason:Started]}"
time="2025-11-05T08:38:33Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:38:33Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:38:33Z reason:Pulled]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:77b1142bbf namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:38:34Z lastTimestamp:2025-11-05T08:38:34Z reason:ProbeError]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:ce75dd64b5 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10300/healthz\": dial tcp 10.0.128.2:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:38:34Z lastTimestamp:2025-11-05T08:38:34Z reason:Unhealthy]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:0c8059276e namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:38:34Z lastTimestamp:2025-11-05T08:38:34Z reason:ProbeError]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:b77166b047 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:gcp-pd-csi-driver-node-fxgtb]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.2:10303/healthz\": dial tcp 10.0.128.2:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:38:34Z lastTimestamp:2025-11-05T08:38:34Z reason:Unhealthy]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:38:34Z lastTimestamp:2025-11-05T08:38:34Z reason:Created]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:38:34Z lastTimestamp:2025-11-05T08:38:34Z reason:Started]}"
time="2025-11-05T08:38:34Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulling Pulling image \"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:38:34Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:38:34Z reason:Pulling]}"
time="2025-11-05T08:38:35Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:f8daa70970 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 785ms (785ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T08:38:35Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:38:35Z reason:Pulled]}"
time="2025-11-05T08:38:35Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:38:35Z lastTimestamp:2025-11-05T08:38:35Z reason:Created]}"
time="2025-11-05T08:38:35Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-c-nvbxr pod:loki-promtail-tqnvt]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:38:35Z lastTimestamp:2025-11-05T08:38:35Z reason:Started]}"
I1105 08:38:52.824926 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:39:39Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:83768cdc76 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:4 firstTimestamp:2025-11-05T06:50:45Z lastTimestamp:2025-11-05T08:39:39Z reason:SetDesiredConfig]}"
time="2025-11-05T08:39:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:39:50Z reason:Unhealthy]}"
time="2025-11-05T08:39:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:39:51Z reason:Unhealthy]}"
I1105 08:39:53.070202 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:40:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:40:01Z reason:Unhealthy]}"
time="2025-11-05T08:40:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:40:10Z reason:Unhealthy]}"
time="2025-11-05T08:40:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:40:11Z reason:Unhealthy]}"
time="2025-11-05T08:40:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:40:21Z reason:Unhealthy]}"
time="2025-11-05T08:40:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:40:30Z reason:Unhealthy]}"
time="2025-11-05T08:40:31Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:40:31Z reason:Unhealthy]}"
time="2025-11-05T08:40:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:40:41Z reason:Unhealthy]}"
time="2025-11-05T08:40:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:40:50Z reason:Unhealthy]}"
time="2025-11-05T08:40:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:40:51Z reason:Unhealthy]}"
I1105 08:40:53.325375 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:41:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-mpqtl]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:39:51Z lastTimestamp:2025-11-05T08:41:01Z reason:Unhealthy]}"
time="2025-11-05T08:41:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:41:10Z reason:Unhealthy]}"
KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:41:10Z reason:Unhealthy]}" time="2025-11-05T08:41:30Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:41:30Z reason:Unhealthy]}" time="2025-11-05T08:41:50Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:7 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:41:50Z reason:Unhealthy]}" I1105 08:41:53.569583 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:42:10Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:metrics-server-5b778f5ffb-p2bg6]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:8 firstTimestamp:2025-11-05T08:39:50Z lastTimestamp:2025-11-05T08:42:10Z reason:Unhealthy]}" Watch received OS update event: OSUpdateStarted - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T08:42:32Z I1105 08:42:53.825119 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' Watch received OS update event: OSUpdateStaged - ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 - 2025-11-05T08:43:19Z time="2025-11-05T08:43:24Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}" I1105 08:43:54.131722 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:44:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:43c2c9078a namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{NodeNotReady Node is not ready map[count:5 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:44:04Z reason:NodeNotReady]}" time="2025-11-05T08:44:04Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:42 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:44:04Z reason:TopologyAwareHintsDisabled]}" I1105 08:44:54.369522 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all' time="2025-11-05T08:45:00Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:45:00Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:45:00Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:45:00Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:2 firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:45:00Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:2 firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:45:00Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:2 
firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:45:01Z" level=info msg="event interval matches NodeHasSufficientMemory" locator="{Node map[hmsg:3b12617dfd node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientMemory Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientMemory map[count:3 firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasSufficientMemory roles:worker]}" time="2025-11-05T08:45:01Z" level=info msg="event interval matches NodeHasNoDiskPressure" locator="{Node map[hmsg:c43b2c3324 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasNoDiskPressure Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasNoDiskPressure map[count:3 firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasNoDiskPressure roles:worker]}" time="2025-11-05T08:45:01Z" level=info msg="event interval matches NodeHasSufficientPID" locator="{Node map[hmsg:b78b055259 node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7]}" message="{NodeHasSufficientPID Node ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 status is now: NodeHasSufficientPID map[count:3 firstTimestamp:2025-11-05T08:45:00Z lastTimestamp:2025-11-05T08:45:00Z reason:NodeHasSufficientPID roles:worker]}" time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:01Z reason:NetworkNotReady]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:01Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:01Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:01Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:01Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches TopologyAwareHintsDisabledDuringTaintManagerTests" locator="{Kind map[hmsg:1de144762d namespace:openshift-dns service:dns-default]}" message="{TopologyAwareHintsDisabled Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 2 zones), addressType: IPv4 map[count:43 firstTimestamp:2025-11-05T07:23:58Z lastTimestamp:2025-11-05T08:45:02Z reason:TopologyAwareHintsDisabled]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:2 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:02Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:2 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:02Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:2 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:02Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:2 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:02Z reason:FailedMount]}"
time="2025-11-05T08:45:02Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:2 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:02Z reason:NetworkNotReady]}"
time="2025-11-05T08:45:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:3 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:03Z reason:FailedMount]}"
time="2025-11-05T08:45:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:3 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:03Z reason:FailedMount]}"
time="2025-11-05T08:45:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:3 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:03Z reason:FailedMount]}"
time="2025-11-05T08:45:03Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:3 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:03Z reason:FailedMount]}"
time="2025-11-05T08:45:04Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:3 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:04Z reason:NetworkNotReady]}"
time="2025-11-05T08:45:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:4 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:05Z reason:FailedMount]}"
time="2025-11-05T08:45:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:4 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:05Z reason:FailedMount]}"
time="2025-11-05T08:45:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:4 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:05Z reason:FailedMount]}"
time="2025-11-05T08:45:05Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:4 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:05Z reason:FailedMount]}"
time="2025-11-05T08:45:06Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:4 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:06Z reason:NetworkNotReady]}"
time="2025-11-05T08:45:08Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:c637a3ef66 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{NetworkNotReady network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? map[count:5 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:08Z reason:NetworkNotReady]}"
time="2025-11-05T08:45:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:7a3fa10a89 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"cookie-secret\" : object \"openshift-e2e-loki\"/\"cookie-secret\" not registered map[count:5 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:09Z reason:FailedMount]}"
time="2025-11-05T08:45:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:815027e72c namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"proxy-tls\" : object \"openshift-e2e-loki\"/\"proxy-tls\" not registered map[count:5 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:09Z reason:FailedMount]}"
time="2025-11-05T08:45:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:701ffa2d87 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"config\" : object \"openshift-e2e-loki\"/\"loki-promtail\" not registered map[count:5 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:09Z reason:FailedMount]}"
time="2025-11-05T08:45:09Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:32867844fe namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{FailedMount MountVolume.SetUp failed for volume \"kube-api-access-d9hwx\" : [object \"openshift-e2e-loki\"/\"kube-root-ca.crt\" not registered, object \"openshift-e2e-loki\"/\"openshift-service-ca.crt\" not registered] map[count:5 firstTimestamp:2025-11-05T08:45:01Z lastTimestamp:2025-11-05T08:45:09Z reason:FailedMount]}"
time="2025-11-05T08:45:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cd29a577c1 namespace:openshift-e2e-loki pod:loki-promtail-4k6zx]}" message="{AddedInterface Add eth0 [10.131.0.3/23] from ovn-kubernetes map[firstTimestamp:2025-11-05T08:45:18Z lastTimestamp:2025-11-05T08:45:18Z reason:AddedInterface]}"
time="2025-11-05T08:45:18Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:1769ebd414 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"quay.io/openshift-logging/promtail:v2.9.8\" already present on machine map[container:promtail firstTimestamp:2025-11-05T08:45:18Z image:quay.io/openshift-logging/promtail:v2.9.8 lastTimestamp:2025-11-05T08:45:18Z reason:Pulled]}"
time="2025-11-05T08:45:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:3a3cec1a05 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: promtail map[firstTimestamp:2025-11-05T08:45:19Z lastTimestamp:2025-11-05T08:45:19Z reason:Created]}"
time="2025-11-05T08:45:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:25ecae0504 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container promtail map[firstTimestamp:2025-11-05T08:45:19Z lastTimestamp:2025-11-05T08:45:19Z reason:Started]}"
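The burst above is the usual shape of a worker node rebooting during a disruptive run: kubelet comes back with no CNI config yet (NetworkNotReady), volume mounts fail until the pod's secrets and configmaps are re-registered (the "not registered" FailedMount events), and everything clears once ovn-kubernetes assigns the pod an address (AddedInterface) and the promtail containers restart. When triaging a stream like this, it can help to tally intervals by reason. A small Go sketch that does so from stdin, relying only on the reason:<Reason> token visible in the messages above (it counts the first such token per line, so lines carrying two reason tokens, like the NetworkNotReady ones, are tallied under the first):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the first reason:<Reason> token in each event-interval line.
        re := regexp.MustCompile(`reason:([A-Za-z]+)`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20) // these log lines can be very long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for reason, n := range counts {
            fmt.Printf("%-32s %d\n", reason, n)
        }
    }

Fed this excerpt on stdin, such a tally would show FailedMount dominating, which is often enough to recognize the pattern without reading every line.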
time="2025-11-05T08:45:19Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:ce1ec925c4 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Container image \"registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest\" already present on machine map[container:oauth-proxy firstTimestamp:2025-11-05T08:45:19Z image:registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest lastTimestamp:2025-11-05T08:45:19Z reason:Pulled]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:416a528720 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:45:20Z lastTimestamp:2025-11-05T08:45:20Z reason:ProbeError]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:68683c9410 namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10300/healthz\": dial tcp 10.0.128.3:10300: connect: connection refused map[firstTimestamp:2025-11-05T08:45:20Z lastTimestamp:2025-11-05T08:45:20Z reason:Unhealthy]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:5a92323102 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: oauth-proxy map[firstTimestamp:2025-11-05T08:45:20Z lastTimestamp:2025-11-05T08:45:20Z reason:Created]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:064786e2fe namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{ProbeError Liveness probe error: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:45:20Z lastTimestamp:2025-11-05T08:45:20Z reason:ProbeError]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e172d2e44c namespace:openshift-cluster-csi-drivers node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:gcp-pd-csi-driver-node-qlqjs]}" message="{Unhealthy Liveness probe failed: Get \"http://10.0.128.3:10303/healthz\": dial tcp 10.0.128.3:10303: connect: connection refused map[firstTimestamp:2025-11-05T08:45:20Z lastTimestamp:2025-11-05T08:45:20Z reason:Unhealthy]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:b014dc3b1e namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container oauth-proxy map[firstTimestamp:2025-11-05T08:45:20Z lastTimestamp:2025-11-05T08:45:20Z reason:Started]}" time="2025-11-05T08:45:20Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:788695b931 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulling Pulling image 
\"quay.io/observatorium/token-refresher\" map[container:prod-bearer-token firstTimestamp:2025-11-05T08:45:20Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:45:20Z reason:Pulling]}" time="2025-11-05T08:45:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:cafcad0b34 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Pulled Successfully pulled image \"quay.io/observatorium/token-refresher\" in 764ms (764ms including waiting). Image size: 9597573 bytes. map[container:prod-bearer-token firstTimestamp:2025-11-05T08:45:21Z image:quay.io/observatorium/token-refresher lastTimestamp:2025-11-05T08:45:21Z reason:Pulled]}" time="2025-11-05T08:45:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:19d90da327 namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Created Created container: prod-bearer-token map[firstTimestamp:2025-11-05T08:45:21Z lastTimestamp:2025-11-05T08:45:21Z reason:Created]}" time="2025-11-05T08:45:21Z" level=info msg="event interval matches E2ELoki" locator="{Kind map[hmsg:13d5c451aa namespace:openshift-e2e-loki node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:loki-promtail-4k6zx]}" message="{Started Started container prod-bearer-token map[firstTimestamp:2025-11-05T08:45:21Z lastTimestamp:2025-11-05T08:45:21Z reason:Started]}" time="2025-11-05T08:45:29Z" level=info msg="event interval matches SetDesiredConfigTooOften" locator="{Kind map[hmsg:66d66c84b6 machineconfigpool:worker namespace:openshift-machine-config-operator]}" message="{SetDesiredConfig Targeted node ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt to MachineConfig: rendered-worker-68e6c340dbef76691f081bbf7159850a map[count:3 firstTimestamp:2025-11-05T06:51:11Z lastTimestamp:2025-11-05T08:45:29Z reason:SetDesiredConfig]}" time="2025-11-05T08:45:40Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:6c12107a0e namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-xr9h8]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.8:8443/healthz\": dial tcp 10.131.0.8:8443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:45:40Z lastTimestamp:2025-11-05T08:45:40Z reason:ProbeError]}" time="2025-11-05T08:45:40Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e2b55a289f namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:prometheus-operator-admission-webhook-678bdc6597-xr9h8]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.8:8443/healthz\": dial tcp 10.131.0.8:8443: connect: connection refused map[firstTimestamp:2025-11-05T08:45:40Z lastTimestamp:2025-11-05T08:45:40Z reason:Unhealthy]}" time="2025-11-05T08:45:40Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:29cbaec9d9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-29pf8]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.7:9443/health\": dial tcp 10.131.0.7:9443: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:45:40Z lastTimestamp:2025-11-05T08:45:40Z reason:ProbeError]}" time="2025-11-05T08:45:40Z" level=info msg="event 
interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:e428fabba0 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-29pf8]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.7:9443/health\": dial tcp 10.131.0.7:9443: connect: connection refused map[firstTimestamp:2025-11-05T08:45:40Z lastTimestamp:2025-11-05T08:45:40Z reason:Unhealthy]}" time="2025-11-05T08:45:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fmpgr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:45:41Z reason:Unhealthy]}" time="2025-11-05T08:45:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-tz5z2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:45:41Z reason:Unhealthy]}" time="2025-11-05T08:45:41Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. 
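The ProbeError/Unhealthy pairs here are kubelet's liveness, readiness, and startup probes firing while containers come back up: "connect: connection refused" means nothing is listening on the health port yet, while the HTTP 500s from metrics-server and the router mean the endpoint is up but reporting not-ready. In essence each HTTP probe is a GET with a short timeout, as in this illustrative Go sketch (not kubelet's actual implementation; the URL is taken from a probe line in this log):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTP health check in the style of the kubelet probes
    // logged above: a refused connection or a non-2xx status is a failure.
    func probe(url string) error {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("probe error: %w", err) // e.g. "connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            return fmt.Errorf("probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        // Health endpoint taken from the router startup-probe line in this log.
        if err := probe("http://10.131.0.10:1936/healthz/ready"); err != nil {
            fmt.Println(err)
        }
    }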
time="2025-11-05T08:45:41Z" level=info msg="event interval matches FailedSchedulingDuringNodeUpdate" locator="{Kind map[hmsg:f6a3758ccd namespace:openshift-monitoring pod:prometheus-k8s-1]}" message="{FailedScheduling 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. map[firstTimestamp:0001-01-01T00:00:00Z lastTimestamp:0001-01-01T00:00:00Z reason:FailedScheduling]}"
time="2025-11-05T08:45:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:29cbaec9d9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-29pf8]}" message="{ProbeError Readiness probe error: Get \"https://10.131.0.7:9443/health\": dial tcp 10.131.0.7:9443: connect: connection refused\nbody: \n map[count:2 firstTimestamp:2025-11-05T08:45:40Z lastTimestamp:2025-11-05T08:45:41Z reason:ProbeError]}"
time="2025-11-05T08:45:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:e428fabba0 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:monitoring-plugin-79f9bc6c-29pf8]}" message="{Unhealthy Readiness probe failed: Get \"https://10.131.0.7:9443/health\": dial tcp 10.131.0.7:9443: connect: connection refused map[count:2 firstTimestamp:2025-11-05T08:45:40Z lastTimestamp:2025-11-05T08:45:41Z reason:Unhealthy]}"
time="2025-11-05T08:45:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:888aee621e namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tmxmx]}" message="{ProbeError Startup probe error: Get \"http://10.131.0.10:1936/healthz/ready\": dial tcp 10.131.0.10:1936: connect: connection refused\nbody: \n map[firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:45:41Z reason:ProbeError]}"
time="2025-11-05T08:45:41Z" level=info msg="event interval matches ConnectionErrorDuringSingleNodeAPIServerTargetDown" locator="{Kind map[hmsg:85e7277d63 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-a-thbk7 pod:router-default-5f49b749c7-tmxmx]}" message="{Unhealthy Startup probe failed: Get \"http://10.131.0.10:1936/healthz/ready\": dial tcp 10.131.0.10:1936: connect: connection refused map[firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:45:41Z reason:Unhealthy]}"
time="2025-11-05T08:45:51Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-tz5z2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:45:51Z reason:Unhealthy]}"
I1105 08:45:54.761416 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:46:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fmpgr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:2 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:46:01Z reason:Unhealthy]}"
time="2025-11-05T08:46:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-tz5z2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:46:01Z reason:Unhealthy]}"
time="2025-11-05T08:46:11Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-tz5z2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:46:11Z reason:Unhealthy]}"
time="2025-11-05T08:46:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fmpgr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:3 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:46:21Z reason:Unhealthy]}"
time="2025-11-05T08:46:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-ingress node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:router-default-5f49b749c7-tz5z2]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:46:21Z reason:Unhealthy]}"
time="2025-11-05T08:46:41Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fmpgr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:4 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:46:41Z reason:Unhealthy]}"
I1105 08:46:55.012346 1669 client.go:1023] Running 'oc --kubeconfig=/tmp/kubeconfig-2093074633 adm upgrade status --details=all'
time="2025-11-05T08:47:01Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fmpgr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:5 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:47:01Z reason:Unhealthy]}"
time="2025-11-05T08:47:21Z" level=info msg="event interval matches KubeletUnhealthyReadinessProbeFailed" locator="{Kind map[hmsg:36b79e79a9 namespace:openshift-monitoring node:ci-op-x0f88pwp-f3da4-d9fgd-worker-b-sn8dt pod:metrics-server-5b778f5ffb-fmpgr]}" message="{Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 map[count:6 firstTimestamp:2025-11-05T08:45:41Z lastTimestamp:2025-11-05T08:47:21Z reason:Unhealthy]}"
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 10m0s grace period","severity":"error","time":"2025-11-05T08:47:40Z"}
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2025-11-05T08:47:40Z"}
{"component":"entrypoint","error":"process timed out","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2025-11-05T08:47:40Z"}
error: failed to execute wrapped command: exit status 127
INFO[2025-11-05T08:47:41Z] Step e2e-gcp-disruptive-openshift-e2e-test failed after 4h10m12s.
INFO[2025-11-05T08:47:41Z] Step phase test failed after 4h10m12s.
INFO[2025-11-05T08:47:41Z] Running multi-stage phase post
INFO[2025-11-05T08:47:41Z] Signalling observer pod "e2e-gcp-disruptive-observers-resource-watch" to terminate...
INFO[2025-11-05T08:47:41Z] Running step e2e-gcp-disruptive-gather-core-dump.
INFO[2025-11-05T08:48:03Z] Step e2e-gcp-disruptive-gather-core-dump succeeded after 21s.
INFO[2025-11-05T08:48:03Z] Running step e2e-gcp-disruptive-gather-gcp-console.
INFO[2025-11-05T08:48:32Z] Step e2e-gcp-disruptive-gather-gcp-console succeeded after 28s.
INFO[2025-11-05T08:48:32Z] Running step e2e-gcp-disruptive-gather-must-gather.
INFO[2025-11-05T08:53:01Z] Step e2e-gcp-disruptive-gather-must-gather succeeded after 4m29s.
INFO[2025-11-05T08:53:01Z] Running step e2e-gcp-disruptive-gather-extra.
INFO[2025-11-05T08:53:27Z] Logs for container test in pod e2e-gcp-disruptive-gather-extra:
INFO[2025-11-05T08:53:27Z] Gathering artifacts ...
E1105 08:53:20.849934 28 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://api.ci-op-x0f88pwp-f3da4.XXXXXXXXXXXXXXXXXXXXXX:6443/api?timeout=5s\": context deadline exceeded"
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:84","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2025-11-05T08:53:26Z"}
error: failed to execute wrapped command: exit status 1
INFO[2025-11-05T08:53:27Z] Step e2e-gcp-disruptive-gather-extra failed after 26s.
INFO[2025-11-05T08:53:27Z] Running step e2e-gcp-disruptive-gather-audit-logs.
INFO[2025-11-05T08:54:59Z] Step e2e-gcp-disruptive-gather-audit-logs succeeded after 1m32s.
INFO[2025-11-05T08:54:59Z] Running step e2e-gcp-disruptive-ipi-deprovision-deprovision.
INFO[2025-11-05T08:56:33Z] Step e2e-gcp-disruptive-observers-resource-watch succeeded after 5h11m55s.
INFO[2025-11-05T09:03:11Z] Step e2e-gcp-disruptive-ipi-deprovision-deprovision succeeded after 8m11s.
INFO[2025-11-05T09:03:11Z] Step phase post failed after 15m30s.
INFO[2025-11-05T09:03:11Z] Releasing leases for test e2e-gcp-disruptive
INFO[2025-11-05T09:03:12Z] Ran for 5h42m56s
ERRO[2025-11-05T09:03:12Z] Some steps failed:
ERRO[2025-11-05T09:03:12Z] * could not run steps: step e2e-gcp-disruptive failed: ["e2e-gcp-disruptive" test steps failed: "e2e-gcp-disruptive" pod "e2e-gcp-disruptive-openshift-e2e-test" failed: could not watch pod: the pod ci-op-x0f88pwp/e2e-gcp-disruptive-openshift-e2e-test failed after 4h10m11s (failed containers: test): ContainerFailed one or more containers exited Container test exited with code 127, reason Error
Link to step on registry info site: https://steps.ci.openshift.org/reference/openshift-e2e-test
Link to job on registry info site: https://steps.ci.openshift.org/job?org=openshift&repo=origin&branch=main&test=e2e-gcp-disruptive, "e2e-gcp-disruptive" post steps failed: "e2e-gcp-disruptive" pod "e2e-gcp-disruptive-gather-extra" failed: could not watch pod: the pod ci-op-x0f88pwp/e2e-gcp-disruptive-gather-extra failed after 25s (failed containers: test): ContainerFailed one or more containers exited Container test exited with code 1, reason Error
Link to step on registry info site: https://steps.ci.openshift.org/reference/gather-extra
Link to job on registry info site: https://steps.ci.openshift.org/job?org=openshift&repo=origin&branch=main&test=e2e-gcp-disruptive]
INFO[2025-11-05T09:03:12Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:utilizing_lease:executing_test:executing_multi_stage_test'
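For reference when reading the summary: exit status 127 is, by shell convention, "command not found", while exit status 1 is a generic failure; whether the test binary actually hit the 127 path or merely propagated that code after the timeout is not determinable from this log. A minimal Go check of the 127 convention:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Running a nonexistent command through a shell yields the conventional 127.
        err := exec.Command("sh", "-c", "definitely-not-a-real-command").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Printf("exit status %d\n", ee.ExitCode()) // prints: exit status 127
        }
    }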