SIG Failure Report

SIG: compute
Job: pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
Build: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16911/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2025871213372379136
Failures (test name followed by failure message):
[sig-compute] Infrastructure [rfe_id:4126][crit:medium][vendor:cnv-qe@redhat.com][level:component]Taints and toleration CriticalAddonsOnly taint set on a node [test_id:4134] kubevirt components on that node should not evict tests/infrastructure/taints-and-tolerations.go:104 Unexpected error: <*errors.StatusError | 0xc008d7fcc0>: rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:60234->127.0.0.1:2379: read: connection reset by peer { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:60234->127.0.0.1:2379: read: connection reset by peer", Reason: "", Details: nil, Code: 500, }, } occurred tests/infrastructure/taints-and-tolerations.go:132
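The dial target in this message is port 2379, the conventional etcd client port: the test lost the cluster's etcd, not a KubeVirt component. A minimal triage sketch for pulling that target out of the message (an assumption here is that the same signature recurs throughout the full build log; a sample line from this report stands in for the log):

```shell
# Sample failure message from the report above.
msg='read tcp 127.0.0.1:60234->127.0.0.1:2379: read: connection reset by peer'
# Extract the destination of the failed read (the part after "->").
target=$(printf '%s' "$msg" | sed -E 's/.*->([0-9.]+:[0-9]+).*/\1/')
echo "$target"   # 127.0.0.1:2379, the default etcd client port
```

Grepping the whole build log this way and counting targets quickly shows whether a run's failures share one infrastructure root cause.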
SIG: compute
Job: pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
Build: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16865/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026862415760592896
Failures (test name followed by failure message):
[sig-compute]VM Rollout Strategy When using the Stage rollout strategy [test_id:11207]should set RestartRequired when changing any spec field tests/tests_suite_test.go:109 Timed out after 300.001s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*url.Error | 0xc00644d650>: Get "https://127.0.0.1:41963/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:41963: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:41963/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc003dfb130>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00694f2c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 41963, Zone: "", }, Err: <*os.SyscallError | 0xc00743b620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred At one point, however, the function did return successfully. Yet, Eventually failed because the matcher was not satisfied: Expected <*v1.KubeVirt | 0xc007222008>: { TypeMeta: { Kind: "KubeVirt", APIVersion: "kubevirt.io/v1", }, ObjectMeta: { Name: "kubevirt", GenerateName: "", Namespace: "kubevirt", SelfLink: "", UID: "67e46c03-50d1-4aca-afa4-a8b0d9b825eb", ResourceVersion: "80504", Generation: 136, CreationTimestamp: { Time: 2026-02-26T03:47:32Z, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: { "kubevirt.io/storage-observed-api-version": "v1", "kubevirt.io/latest-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "foregroundDeleteKubeVirt", ], ManagedFields: [ { Manager: "kubectl-create", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T03:47:32Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\".\":{},\"f:certificateRotateStrategy\":{},\"f:configuration\":{},\"f:customizeComponents\":{},\"f:imagePullPolicy\":{},\"f:workloadUpdateStrategy\":{}}}", }, 
Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T03:48:18Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubevirt.io/latest-observed-api-version\":{},\"f:kubevirt.io/storage-observed-api-version\":{}},\"f:finalizers\":{\".\":{},\"v:\\\"foregroundDeleteKubeVirt\\\"\":{}}}}", }, Subresource: "", }, { Manager: "virt-controller", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T03:49:14Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:outdatedVirtualMachineInstanceWorkloads\":{}}}", }, Subresource: "status", }, { Manager: "tests.test", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T05:54:57Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\"f:configuration\":{\"f:changedBlockTrackingLabelSelectors\":{\".\":{},\"f:namespaceLabelSelector\":{},\"f:virtualMachineLabelSelector\":{}},\"f:developerConfiguration\":{\".\":{},\"f:featureGates\":{}},\"f:imagePullPolicy\":{},\"f:seccompConfiguration\":{\".\":{},\"f:virtualMachineInstanceProfile\":{\".\":{},\"f:customProfile\":{\".\":{},\"f:localhostProfile\":{}}}}}}}", }, Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T05:55:11Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\".\":{},\"f:conditions\":{},\"f:defaultArchitecture\":{},\"f:generations\":{},\"f:observedDeploymentConfig\":{},\"f:observedDeploymentID\":{},\"f:observedGeneration\":{},\"f:observedKubeVirtRegistry\":{},\"f:observedKubeVirtVersion\":{},\"f:operatorVersion\":{},\"f:phase\":{},\"f:synchronizationAddresses\":{},\"f:targetDeploymentConfig\":{},\"f:targetDeploymentID\":{},\"f:targetKubeVirtRegistry\":{},\"f:targetKubeVirtVersion\":{}}}", }, Subresource: "status", }, ], }, Spec: { ImageTag: "", ImageRegistry: "", ImagePullPolicy: "IfNotPresent", ImagePullSecrets: nil, 
MonitorNamespace: "", ServiceMonitorNamespace: "", MonitorAccount: "", WorkloadUpdateStrategy: { WorkloadUpdateMethods: nil, BatchEvictionSize: nil, BatchEvictionInterval: nil, }, UninstallStrategy: "", CertificateRotationStrategy: {SelfSigned: nil}, ProductVersion: "", ProductName: "", ProductComponent: "", SynchronizationPort: "", Configuration: { CPUModel: "", CPURequest: nil, DeveloperConfiguration: { FeatureGates: [ "NodeRestriction", "CPUManager", "ExperimentalIgnitionSupport", "Sidecar", "Snapshot", "IncrementalBackup", "HostDisk", "EnableVirtioFsStorageVolumes", "DownwardMetrics", "ExpandDisks", "WorkloadEncryptionSEV", "VMExport", "KubevirtSeccompProfile", "ObjectGraph", "DeclarativeHotplugVolumes", "NodeRestriction", "DecentralizedLiveMigration", "PanicDevices", "VideoConfig", "UtilityVolumes", "MigrationPriorityQueue", "RebootPolicy", "ContainerPathVolumes", ], DisabledFeatureGates: nil, LessPVCSpaceToleration: 0, MinimumReservePVCBytes: 0, MemoryOvercommit: 0, NodeSelectors: nil, UseEmulation: false, CPUAllocationRatio: 0, MinimumClusterTSCFrequency: nil, DiskVerification: nil, LogVerbosity: nil, ClusterProfiler: false, }, EmulatedMachines: nil, ImagePullPolicy: "IfNotPresent", MigrationConfiguration: nil, MachineType: "", NetworkConfiguration: nil, OVMFPath: "", SELinuxLauncherType: "", DefaultRuntimeClass: "", SMBIOSConfig: nil, ArchitectureConfiguration: nil, EvictionStrategy: nil, AdditionalGuestMemoryOverheadRatio: nil, SupportContainerResources: nil, SupportedGuestAgentVersions: nil, MemBalloonStatsPeriod: nil, PermittedHostDevices: nil, MediatedDevicesConfiguration: nil, DeprecatedMinCPUModel: "", ObsoleteCPUModels: nil, VirtualMachineInstancesPerNode: nil, APIConfiguration: nil, WebhookConfiguration: nil, ControllerConfiguration: nil, HandlerConfiguration: nil, TLSConfiguration: nil, SeccompConfiguration: { VirtualMachineInstanceProfile: { CustomProfile: { LocalhostProfile: "kubevirt/kubevirt.json", RuntimeDefaultProfile: false, }, }, }, 
VMStateStorageClass: "", VirtualMachineOptions: nil, KSMConfiguration: nil, AutoCPULimitNamespaceLabelSelector: nil, LiveUpdateConfiguration: nil, VMRolloutStrategy: nil, CommonInstancetypesDeployment: nil, VirtTemplateDeployment: nil, Instancetype: nil, Hypervisors: nil, ChangedBlockTrackingLabelSelectors: { NamespaceLabelSelector: { MatchLabels: { "changedBlockTracking": "true", }, MatchExpressions: nil, }, VirtualMachineLabelSelector: { MatchLabels: { "changedBlockTracking": "true", }, MatchExpressions: nil, }, }, }, Infra: nil, Workloads: nil, CustomizeComponents: {Patches: nil, Flags: nil}, }, Status: { Phase: "Deployed", Conditions: [ { Type: "Available", Status: "True", LastProbeTime: { Time: 2026-02-26T05:55:07Z, }, LastTransitionTime: { Time: 2026-02-26T05:55:07Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Progressing", Status: "False", LastProbeTime: { Time: 2026-02-26T05:55:07Z, }, LastTransitionTime: { Time: 2026-02-26T05:55:07Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Degraded", Status: "False", LastProbeTime: { Time: 2026-02-26T05:55:07Z, }, LastTransitionTime: { Time: 2026-02-26T05:55:07Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Created", Status: "True", LastProbeTime: { Time: 2026-02-26T03:49:09Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "AllResourcesCreated", Message: "All resources were created.", }, ], OperatorVersion: "v1.8.0-beta.0.327+b4245cee0a648c", TargetKubeVirtRegistry: "registry:5000/kubevirt", TargetKubeVirtVersion: "devel", TargetDeploymentConfig: "{\"id\":\"14c07b657a87bc1803569b384655fed24bb172dc\",\"namespace\":\"kubevirt\",\"registry\":\"registry:5000/kubevirt\",\"kubeVirtVersion\":\"devel\",\"virtOperatorImage\":\"registry:5000/kubevirt/virt-operator:devel\",\"additionalProperties\":{\"CertificateRotationStrategy\":\"\\u003cv1.KubeVirtCertificateRotateStrategy 
Value\\u003e\",\"Configuration\":\"\\u003cv1.KubeVirtConfiguration Value\\u003e\",\"CustomizeComponents\":\"\\u003cv1.CustomizeComponents Value\\u003e\",\"HypervisorName\":\"kvm\",\"ImagePullPolicy\":\"IfNotPresent\",\"ImagePullSecrets\":\"null\",\"Infra\":\"\\u003c*v1.ComponentConfig Value\\u003e\",\"MonitorAccount\":\"\",\"MonitorNamespace\":\"\",\"ProductComponent\":\"\",\"ProductName\":\"\",\"ProductVersion\":\"\",\"ServiceMonitorNamespace\":\"\",\"SynchronizationPort\":\"\",\"UninstallStrategy\":\"\",\"WorkloadUpdateStrategy\":\"\\u003cv1.KubeVirtWorkloadUpdateStrategy Value\\u003e\",\"Workloads\":\"\\u003c*v1.ComponentConfig Value\\u003e\"}}", TargetDeploymentID: "14c07b657a87bc1803569b384655fed24bb172dc", ObservedKubeVirtRegistry: "registry:5000/kubevirt", ObservedKubeVirtVersion: "devel", ObservedDeploymentConfig: "{\"id\":\"14c07b657a87bc1803569b384655fed24bb172dc\",\"namespace\":\"kubevirt\",\"registry\":\"registry:5000/kubevirt\",\"kubeVirtVersion\":\"devel\",\"virtOperatorImage\":\"registry:5000/kubevirt/virt-operator:devel\",\"additionalProperties\":{\"CertificateRotationStrategy\":\"\\u003cv1.KubeVirtCertificateRotateStrategy Value\\u003e\",\"Configuration\":\"\\u003cv1.KubeVirtConfiguration Value\\u003e\",\"CustomizeComponents\":\"\\u003cv1.CustomizeComponents Value\\u003e\",\"HypervisorName\":\"kvm\",\"ImagePullPolicy\":\"IfNotPresent\",\"ImagePullSecrets\":\"null\",\"Infra\":\"\\u003c*v1.ComponentConfig Value\\u003e\",\"MonitorAccount\":\"\",\"MonitorNamespace\":\"\",\"ProductComponent\":\"\",\"ProductName\":\"\",\"ProductVersion\":\"\",\"ServiceMonitorNamespace\":\"\",\"SynchronizationPort\":\"\",\"UninstallStrategy\":\"\",\"WorkloadUpdateStrategy\":\"\\u003cv1.KubeVirtWorkloadUpdateStrategy Value\\u003e\",\"Workloads\":\"\\u003c*v1.ComponentConfig Value\\u003e\"}}", ObservedDeploymentID: "14c07b657a87bc1803569b384655fed24bb172dc", OutdatedVirtualMachineInstanceWorkloads: 0, ObservedGeneration: 135, DefaultArchitecture: "amd64", Generations: [ { 
Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstances.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancepresets.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancereplicasets.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachines.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancemigrations.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinesnapshots.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinesnapshotcontents.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinerestores.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancetypes.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclusterinstancetypes.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinepools.pool.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: 
"customresourcedefinitions", Namespace: "", Name: "migrationpolicies.migrations.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinepreferences.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclusterpreferences.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineexports.export.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclones.clone.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinebackups.backup.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinebackuptrackers.backup.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "validatingwebhookconfigurations", Namespace: "", Name: "virt-operator-validator", LastGeneration: 228, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "validatingwebhookconfigurations", Namespace: "", Name: "virt-api-validator", LastGeneration: 228, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "mutatingwebhookconfigurations", Namespace: "", Name: "virt-api-mutator", LastGeneration: 227, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-api", LastGeneration: 136, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-api-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: 
"virt-controller", LastGeneration: 134, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-controller-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "daemonsets", Namespace: "kubevirt", Name: "virt-handler", LastGeneration: 3, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "mutatingwebhookconfigurations", Namespace: "", Name: "virt-launcher-pod-mutator", LastGeneration: 52, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-exportproxy", LastGeneration: 26, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-exportproxy-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-synchronization-controller", LastGeneration: 26, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-synchronization-controller-pdb", LastGeneration: 1, Hash: "", }, ], SynchronizationAddresses: ["10.244.0.63:9185", "fd10:244::3f:9185"], }, } to satisfy predicate <func(*v1.KubeVirt) bool>: 0x20a1d40 tests/testsuite/fixture.go:195
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2830]last eviction should fail for multi-replica virt-controller pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc00760e450>: Get "https://127.0.0.1:41963/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:41963: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:41963/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc00859ecd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003974810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 41963, Zone: "", }, Err: <*os.SyscallError | 0xc0043b0ba0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2799]last eviction should fail for multi-replica virt-api pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc006c1e090>: Get "https://127.0.0.1:41963/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:41963: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:41963/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc0006ae370>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002f6cc30>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 41963, Zone: "", }, Err: <*os.SyscallError | 0xc002e15460>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when control plane pods are running [test_id:2806]virt-controller and virt-api pods have a pod disruption budget tests/virt_control_plane_test.go:180 Unexpected error: <*url.Error | 0xc005216000>: Get "https://127.0.0.1:41963/apis/apps/v1/namespaces/kubevirt/deployments": dial tcp 127.0.0.1:41963: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:41963/apis/apps/v1/namespaces/kubevirt/deployments", Err: <*net.OpError | 0xc0006a2be0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00760f770>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 41963, Zone: "", }, Err: <*os.SyscallError | 0xc0043b01c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:184
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when Control plane pods temporarily lose connection to Kubernetes API should fail health checks when connectivity is lost, and recover when connectivity is regained tests/virt_control_plane_test.go:240 Unexpected error: <*url.Error | 0xc00563b3e0>: Get "https://127.0.0.1:41963/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler": dial tcp 127.0.0.1:41963: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:41963/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler", Err: <*net.OpError | 0xc00728bdb0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002f6c2a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 41963, Zone: "", }, Err: <*os.SyscallError | 0xc0084ff1c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:241
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc006d7e780>: Get "https://127.0.0.1:41963/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:41963: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:41963/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006b49b80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001fc1650>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 41963, Zone: "", }, Err: <*os.SyscallError | 0xc0047f5700>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
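Every failure in this run, from the Eventually timeout down through AfterSuite, dials the same local endpoint (127.0.0.1:41963) and is refused, which points at a single apiserver outage rather than per-test bugs. A minimal sketch of grouping messages by dial target to confirm that; the inline sample messages from this report stand in for a downloaded build log, which is an assumption:

```shell
# Count identical "connection refused" signatures across failure messages.
refused=$(cat <<'EOF' | grep -oE 'dial tcp [0-9.]+:[0-9]+: connect: connection refused' | sort | uniq -c | sed 's/^ *//'
Get "https://127.0.0.1:41963/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:41963: connect: connection refused
Get "https://127.0.0.1:41963/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:41963: connect: connection refused
Get "https://127.0.0.1:41963/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:41963: connect: connection refused
EOF
)
echo "$refused"   # 3 dial tcp 127.0.0.1:41963: connect: connection refused
```

A single dominant line in the output is the tell that the run should be triaged as one infrastructure incident, not as individual test regressions.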
SIG: compute
Job: pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
Build: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16865/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026630206847979520
Failures (test name followed by failure message):
[sig-compute] Infrastructure Node-labeller node with obsolete host-model cpuModel should not schedule vmi with host-model cpuModel to node with obsolete host-model cpuModel tests/infrastructure/node-labeller.go:344 Timed out after 308.564s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*rest.wrapPreviousError | 0xc007a8bee0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:34743: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:57772->127.0.0.1:34743: read: connection reset by peer { currentErr: <*url.Error | 0xc001aa0ae0>{ Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc006721c20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001474390>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc007a8be80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc0097b9ae0>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc004289c50>{IP: [127, 0, 0, 1], Port: 57772, Zone: ""}, Addr: <*net.TCPAddr | 0xc004289cb0>{IP: [127, 0, 0, 1], Port: 34743, Zone: ""}, Err: <*os.SyscallError | 0xc003cae360>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred At one point, however, the function did return successfully. 
Yet, Eventually failed because the matcher was not satisfied: Expected <[]interface {} | len:4, cap:4>: [ <map[string]interface {} | len:6>{ "lastProbeTime": <string>"2026-02-25T13:50:32Z", "lastTransitionTime": <string>"2026-02-25T13:50:32Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Available", "status": <string>"False", }, <map[string]interface {} | len:6>{ "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Progressing", "status": <string>"True", "lastProbeTime": <string>"2026-02-25T13:50:32Z", "lastTransitionTime": <string>"2026-02-25T13:50:32Z", "reason": <string>"DeploymentInProgress", }, <map[string]interface {} | len:6>{ "lastProbeTime": <string>"2026-02-25T13:50:32Z", "lastTransitionTime": <string>"2026-02-25T13:50:32Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Degraded", "status": <string>"False", }, <map[string]interface {} | len:6>{ "message": <string>"All resources were created.", "type": <string>"Created", "status": <string>"True", "lastProbeTime": <string>"2026-02-25T12:28:42Z", "lastTransitionTime": nil, "reason": <string>"AllResourcesCreated", }, ] to find condition of type 'Available' and status 'True' but got 'False' tests/testsuite/fixture.go:195
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple default clone tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004b906c0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00640b450>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00621d6b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00640ce20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple clone with snapshot source, create clone before snapshot tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc005100570>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00055b6d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006648d20>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00a7c82e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of labels/annotations tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0081f24e0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0097b99a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001475290>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc003029ca0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of template.labels/template.annotations tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0070fbcb0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006614cd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002c91bf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0087d8b00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with changed MAC address tests/clone_test.go:56 Timed out after 10.010s. Unexpected error: <*url.Error | 0xc007a0ea20>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware clone with changed SMBios serial tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0064083c0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0098371d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00654a7e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00062bf20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware should strip firmware UUID tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0049d6bd0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc002bdbd60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006442ff0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00a2466a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [test_id:4099] should be rotated when a new CA is created tests/infrastructure/certificates.go:69 Unexpected error: <*url.Error | 0xc0028499b0>: Get "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca", Err: <*net.OpError | 0xc002a3f090>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007182d50>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00722a6e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libinfra/certificates.go:59
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [sig-compute][test_id:4100] should be valid during the whole rotation process tests/infrastructure/certificates.go:136 Unexpected error: <*url.Error | 0xc00688c150>: Get "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api", Err: <*net.OpError | 0xc003eceaa0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001f02990>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00409c680>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libpod/certs.go:51
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4101] virt-operator tests/infrastructure/certificates.go:188 Unexpected error: <*url.Error | 0xc008831800>: Patch "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-operator-certs": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-operator-certs", Err: <*net.OpError | 0xc0067210e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002502870>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0032f7de0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4103] virt-api tests/infrastructure/certificates.go:189 Unexpected error: <*url.Error | 0xc007af0510>: Patch "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-api-certs": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-api-certs", Err: <*net.OpError | 0xc00743f900>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0036d9680>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0047c6d20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4104] virt-controller tests/infrastructure/certificates.go:190 Unexpected error: <*url.Error | 0xc0020c4150>: Patch "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-controller-certs": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-controller-certs", Err: <*net.OpError | 0xc006614780>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0035bc870>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc003d2ad20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4105] virt-handlers client side tests/infrastructure/certificates.go:191 Unexpected error: <*url.Error | 0xc004814ba0>: Patch "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-certs": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-certs", Err: <*net.OpError | 0xc00616fea0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003a1e480>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc005105260>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4106] virt-handlers server side tests/infrastructure/certificates.go:192 Unexpected error: <*url.Error | 0xc003a1e4e0>: Patch "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-server-certs": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34743/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-server-certs", Err: <*net.OpError | 0xc00640a820>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00083e840>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc009831920>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute]VSOCK VM creation should expose a VSOCK device Use virtio transitional tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc000c279e0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00330a2d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0057bb8c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00846ee60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK VM creation should expose a VSOCK device Use virtio non-transitional tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc005a16f30>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc003bbe690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003787800>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc008c0b9e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK Live migration should retain the CID for migration target tests/vmi_vsock_test.go:59 Timed out after 14.009s. Unexpected error: <*url.Error | 0xc003680000>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK communicating with VMI via VSOCK should succeed with TLS on both sides tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc003ffda70>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00280ee60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001ee2510>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc006b6fe60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK communicating with VMI via VSOCK should succeed without TLS on both sides tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc006623290>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00657e9b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0068e6660>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc007388900>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK should return err if the port is invalid tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc006ccbbf0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0068ae7d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008831920>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc008886ec0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK should return err if no app listerns on the port tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00168cff0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0072f8aa0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007e7e870>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc003028a60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to reference tests/instancetype/reference_policy.go:96 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00a516390>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00858ceb0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007af0570>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00640d620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expand tests/instancetype/reference_policy.go:97 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc007a81f20>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00616e5a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007af1080>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc004ad3120>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expandAll tests/instancetype/reference_policy.go:98 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004c35830>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00a7aa690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003ffc5a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc009787de0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1659]should report 3 cpu cores under guest OS tests/vmi_configuration_test.go:186 Unexpected error: <*url.Error | 0xc0037d2900>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0067218b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0084c8300>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc006b6f120>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1660]should report 3 sockets under guest OS tests/vmi_configuration_test.go:186 Unexpected error: <*url.Error | 0xc006622e10>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc00743f180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006c4fbf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0074448a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1661]should report 2 sockets from spec.domain.resources.requests under guest OS tests/vmi_configuration_test.go:186 Unexpected error: <*url.Error | 0xc0084c8330>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0081a4500>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0063dc1e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc008098900>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1662]should report 2 sockets from spec.domain.resources.limits under guest OS tests/vmi_configuration_test.go:186 Unexpected error: <*url.Error | 0xc00621c4b0>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc00a0f3810>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008831890>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc002e5db80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1663]should report 2 vCPUs under guest OS tests/vmi_configuration_test.go:186 Unexpected error: <*url.Error | 0xc007e7e990>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc00a205c70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0081f3dd0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc003028800>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1665]should map cores to virtio net queues tests/vmi_configuration_test.go:186 Unexpected error: <*url.Error | 0xc0081f3e00>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0006401e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002ad1d10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0044f2340>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2262][crit:medium][vendor:cnv-qe@redhat.com][level:component]with EFI bootloader method [test_id:1668]should use EFI without secure boot tests/vmi_configuration_test.go:530 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0078f6ff0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009837810>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002c918f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0087d9f00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2262][crit:medium][vendor:cnv-qe@redhat.com][level:component]with EFI bootloader method [test_id:4437]should enable EFI secure boot tests/vmi_configuration_test.go:531 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0070090e0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc001ec7220>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007a80f00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc003317980>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:140][crit:medium][vendor:cnv-qe@redhat.com][level:component]with guestAgent with cluster config changes [test_id:5267]VMI condition should signal unsupported agent presence tests/vmi_configuration_test.go:950 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc007af0960>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006789d60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004c346c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc009786440>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:140][crit:medium][vendor:cnv-qe@redhat.com][level:component]with guestAgent with cluster config changes [test_id:6958]VMI condition should not signal unsupported agent presence for optional commands tests/vmi_configuration_test.go:950 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0037d34a0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0067217c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0045af380>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00257e820>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition using defaultRuntimeClass configuration should apply runtimeClassName to pod when set tests/vmi_configuration_test.go:1208 Expected success, but got an error: <*url.Error | 0xc0045af3b0>: Post "https://127.0.0.1:34743/apis/node.k8s.io/v1/runtimeclasses": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Post", URL: "https://127.0.0.1:34743/apis/node.k8s.io/v1/runtimeclasses", Err: <*net.OpError | 0xc00657f1d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006cca870>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc006b6e3a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } tests/vmi_configuration_test.go:1214
[sig-compute]Configurations VirtualMachineInstance definition with geust-to-request memory should add guest-to-memory headroom tests/vmi_configuration_test.go:1270 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0009ef260>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0097b9720>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006c4fbf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc007389b40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:2869][crit:medium][vendor:cnv-qe@redhat.com][level:component]with machine type settings [test_id:3124]should set status.machine to the resolved QEMU machine type after VMI start tests/vmi_configuration_test.go:1438 Timed out after 13.015s. Unexpected error: <*url.Error | 0xc005ee9830>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:2869][crit:medium][vendor:cnv-qe@redhat.com][level:component]with machine type settings [test_id:3126]should set machine type from kubevirt-config tests/vmi_configuration_test.go:1438 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc001bab3b0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0081a5400>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004902810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc008887280>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:140][crit:medium][vendor:cnv-qe@redhat.com][level:component]with CPU request settings [test_id:3129]should set CPU request from kubevirt-config tests/vmi_configuration_test.go:1526 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc007a80240>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00778ad20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0049d7740>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc004ad3400>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations with automatic CPU limit configured in the CR should not set a CPU limit if the namespace doesn't match the selector tests/vmi_configuration_test.go:1547 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc001f024b0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00616e500>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004464cf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc003f59640>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations with automatic CPU limit configured in the CR should set a CPU limit if the namespace matches the selector tests/vmi_configuration_test.go:1547 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0066223f0>: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0063b3a90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009427c80>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc005c1bc20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1685]non master node should have a cpumanager label tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc009427cb0>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc003894460>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00997b2f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00384e8e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:991]should be scheduled on a node with running cpu manager tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc00621c900>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0097b91d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0084c8720>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc006b6e6e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4632]should be able to start a vm with guest memory different from requested and keep guaranteed qos tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc0084c9a40>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc00a0f3540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009c29260>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0032f6ec0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4023]should start a vmi with dedicated cpus and isolated emulator thread with explicit resources set tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc0008b1890>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc00330af00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00a517020>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0047c6f80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4023]should start a vmi with dedicated cpus and isolated emulator thread without resource requirements set tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc00487a4b0>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0084fe370>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0078f6ea0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc00210d160>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4024]should fail the vmi creation if IsolateEmulatorThread requested without dedicated cpus tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc0050860c0>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0081a4000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0070fbce0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc0044f20c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:802]should configure correct number of vcpus with requests.cpus tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc007af0ff0>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc000641bd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007a81d10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc004ad23a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1688]should fail the vmi creation if the requested resources are inconsistent tests/vmi_configuration_test.go:1761 Unexpected error: <*url.Error | 0xc0070fbd10>: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34743/api/v1/nodes", Err: <*net.OpError | 0xc0020ea190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004465e90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34743, Zone: "", }, Err: <*os.SyscallError | 0xc007a8af20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1689]should fail the vmi creation if cpu is not an integer tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1690]should fail the vmi creation if Guaranteed QOS cannot be set tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:830]should start a vm with no cpu pinning after a vm with cpu pinning on same node tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning cpu pinning with fedora images, dedicated and non dedicated cpu should be possible on same node via spec.domain.cpu.cores [test_id:829]should start a vm with no cpu pinning after a vm with cpu pinning on same node tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning cpu pinning with fedora images, dedicated and non dedicated cpu should be possible on same node via spec.domain.cpu.cores [test_id:832]should start a vm with cpu pinning after a vm with no cpu pinning on same node tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:34743/api/v1/nodes": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:2926][crit:medium][vendor:cnv-qe@redhat.com][level:component]Check Chassis value [test_id:2927]Test Chassis value in a newly created VM tests/vmi_configuration_test.go:2179 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations Custom PCI Addresses configuration should configure custom pci address [test_id:5269]across all available PCI root bus slots tests/vmi_configuration_test.go:2316 Test Panicked at tests/libnet/cloudinit/cloudinit.go:192: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services; both Get "https://127.0.0.1:34743/api/v1/namespaces/kube-system/services/kube-dns" and Get "https://127.0.0.1:34743/api/v1/namespaces/openshift-dns/services/dns-default" failed with dial tcp 127.0.0.1:34743: connect: connection refused. Stack (top frames): libnet/cloudinit.CreateDefaultCloudInitNetworkData (tests/libnet/cloudinit/cloudinit.go:192) via libnet.WithMasqueradeNetworking (tests/libnet/vmibuilder.go:32) from tests/vmi_configuration_test.go:2293
[sig-network] [crit:high][vendor:cnv-qe@redhat.com][level:component] [crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance when virt-handler is responsive VMIs shouldn't fail after the kubelet restarts [sig-compute]with default networking tests/network/vmi_lifecycle.go:109 Timed out after 10.001s. Unexpected error: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:34743/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34743: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16846/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2024400384847515648
Test Name Failure Message
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning cpu pinning with fedora images, dedicated and non dedicated cpu should be possible on same node via spec.domain.cpu.cores [test_id:829]should start a vm with no cpu pinning after a vm with cpu pinning on same node tests/vmi_configuration_test.go:2127 Expected success, but got an error: expect.TimeoutError: expect: timer expired after 120 seconds tests/vmi_configuration_test.go:2148
[sig-compute]VSOCK should return err if the port is invalid tests/vmi_vsock_test.go:227 Waited for 62 seconds on the event stream to match a specific event: event type Normal, reason = Started tests/watcher/watcher.go:233
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16806/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2027017259670573056
Test Name Failure Message
[sig-compute] Infrastructure cluster profiler for pprof data aggregation when ClusterProfiler configuration is enabled it should allow subresource access tests/infrastructure/cluster-profiler.go:61 Unexpected error: an error on the server ("Internal error encountered: Get \"https://10.244.0.18:8443/dump-profiler\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)") has prevented the request from succeeding (Code 500, Reason InternalError) occurred tests/infrastructure/cluster-profiler.go:72
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16806/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026941507390410752
Test Name Failure Message
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1659]should report 3 cpu cores under guest OS tests/tests_suite_test.go:109 Unexpected error: the server was unable to return a response in the time allotted, but may still be processing the request (get events) (Code 504, Reason Timeout) occurred tests/testsuite/namespace.go:351
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1660]should report 3 sockets under guest OS tests/vmi_configuration_test.go:186 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": net/http: TLS handshake timeout - error from a previous attempt: unexpected EOF occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1661]should report 2 sockets from spec.domain.resources.requests under guest OS tests/vmi_configuration_test.go:186 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": net/http: TLS handshake timeout occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1662]should report 2 sockets from spec.domain.resources.limits under guest OS tests/vmi_configuration_test.go:186 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1663]should report 2 vCPUs under guest OS tests/vmi_configuration_test.go:186 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2065][crit:medium][vendor:cnv-qe@redhat.com][level:component]with 3 CPU cores [test_id:1665]should map cores to virtio net queues tests/vmi_configuration_test.go:186 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:60700->127.0.0.1:36633: read: connection reset by peer occurred tests/vmi_configuration_test.go:187
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2262][crit:medium][vendor:cnv-qe@redhat.com][level:component]with EFI bootloader method [test_id:1668]should use EFI without secure boot tests/vmi_configuration_test.go:530 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:2262][crit:medium][vendor:cnv-qe@redhat.com][level:component]with EFI bootloader method [test_id:4437]should enable EFI secure boot tests/vmi_configuration_test.go:531 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:140][crit:medium][vendor:cnv-qe@redhat.com][level:component]with guestAgent with cluster config changes [test_id:5267]VMI condition should signal unsupported agent presence tests/vmi_configuration_test.go:950 Timed out after 10.484s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:48366->127.0.0.1:36633: read: connection reset by peer occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition [rfe_id:140][crit:medium][vendor:cnv-qe@redhat.com][level:component]with guestAgent with cluster config changes [test_id:6958]VMI condition should not signal unsupported agent presence for optional commands tests/vmi_configuration_test.go:950 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations VirtualMachineInstance definition using defaultRuntimeClass configuration should apply runtimeClassName to pod when set tests/vmi_configuration_test.go:1208 Expected success, but got an error: Post "https://127.0.0.1:36633/apis/node.k8s.io/v1/runtimeclasses": dial tcp 127.0.0.1:36633: connect: connection refused tests/vmi_configuration_test.go:1214
[sig-compute]Configurations VirtualMachineInstance definition with geust-to-request memory should add guest-to-memory headroom tests/vmi_configuration_test.go:1270 Timed out after 10.001s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:2869][crit:medium][vendor:cnv-qe@redhat.com][level:component]with machine type settings [test_id:3124]should set status.machine to the resolved QEMU machine type after VMI start tests/vmi_configuration_test.go:1438 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:2869][crit:medium][vendor:cnv-qe@redhat.com][level:component]with machine type settings [test_id:3126]should set machine type from kubevirt-config tests/vmi_configuration_test.go:1438 Timed out after 15.016s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:140][crit:medium][vendor:cnv-qe@redhat.com][level:component]with CPU request settings [test_id:3129]should set CPU request from kubevirt-config tests/vmi_configuration_test.go:1526 Timed out after 10.001s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations with automatic CPU limit configured in the CR should not set a CPU limit if the namespace doesn't match the selector tests/vmi_configuration_test.go:1547 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations with automatic CPU limit configured in the CR should set a CPU limit if the namespace matches the selector tests/vmi_configuration_test.go:1547 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1685]non master node should have a cpumanager label tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:991]should be scheduled on a node with running cpu manager tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4632]should be able to start a vm with guest memory different from requested and keep guaranteed qos tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4023]should start a vmi with dedicated cpus and isolated emulator thread with explicit resources set tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4023]should start a vmi with dedicated cpus and isolated emulator thread without resource requirements set tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:4024]should fail the vmi creation if IsolateEmulatorThread requested without dedicated cpus tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:802]should configure correct number of vcpus with requests.cpus tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1688]should fail the vmi creation if the requested resources are inconsistent tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1689]should fail the vmi creation if cpu is not an integer tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:1690]should fail the vmi creation if Guaranteed QOS cannot be set tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning with cpu pinning enabled [test_id:830]should start a vm with no cpu pinning after a vm with cpu pinning on same node tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning cpu pinning with fedora images, dedicated and non dedicated cpu should be possible on same node via spec.domain.cpu.cores [test_id:829]should start a vm with no cpu pinning after a vm with cpu pinning on same node tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:45152->127.0.0.1:36633: read: connection reset by peer occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning cpu pinning with fedora images, dedicated and non dedicated cpu should be possible on same node via spec.domain.cpu.cores [test_id:832]should start a vm with cpu pinning after a vm with no cpu pinning on same node tests/vmi_configuration_test.go:1761 Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/vmi_configuration_test.go:1763
[sig-compute]Configurations [rfe_id:2926][crit:medium][vendor:cnv-qe@redhat.com][level:component]Check Chassis value [test_id:2927]Test Chassis value in a newly created VM tests/vmi_configuration_test.go:2179 Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Configurations Custom PCI Addresses configuration should configure custom pci address [test_id:5269]across all available PCI root bus slots tests/vmi_configuration_test.go:2316 Test Panicked at tests/libnet/cloudinit/cloudinit.go:192: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services; both Get "https://127.0.0.1:36633/api/v1/namespaces/kube-system/services/kube-dns" and Get "https://127.0.0.1:36633/api/v1/namespaces/openshift-dns/services/dns-default" failed with dial tcp 127.0.0.1:36633: connect: connection refused. Stack (top frames): libnet/cloudinit.CreateDefaultCloudInitNetworkData (tests/libnet/cloudinit/cloudinit.go:192) via libnet.WithMasqueradeNetworking (tests/libnet/vmibuilder.go:32) from tests/vmi_configuration_test.go:2293
[sig-compute] Infrastructure [rfe_id:4126][crit:medium][vendor:cnv-qe@redhat.com][level:component]Taints and toleration CriticalAddonsOnly taint set on a node [test_id:4134] kubevirt components on that node should not evict
tests/infrastructure/taints-and-tolerations.go:59
failed listing kubevirt pods. Unexpected error: Get "https://127.0.0.1:36633/api/v1/namespaces/kubevirt/pods": dial tcp 127.0.0.1:36633: connect: connection refused (occurred at tests/infrastructure/taints-and-tolerations.go:65)
[rfe_id:1177][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]VirtualMachine when node becomes unhealthy the VMs running in that node should be respawned
tests/vm_test.go:991
Timed out after 10.000s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused (occurred at tests/libkubevirt/kubevirt.go:49)
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [test_id:4099] should be rotated when a new CA is created
tests/infrastructure/certificates.go:69
Unexpected error: Get "https://127.0.0.1:36633/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca": dial tcp 127.0.0.1:36633: connect: connection refused (occurred at tests/libinfra/certificates.go:59)

[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [sig-compute][test_id:4100] should be valid during the whole rotation process
tests/infrastructure/certificates.go:136
Unexpected error: Get "https://127.0.0.1:36633/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api": dial tcp 127.0.0.1:36633: connect: connection refused (occurred at tests/libpod/certs.go:51)

[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted, for [test_id:4101] virt-operator, [test_id:4103] virt-api, [test_id:4104] virt-controller, [test_id:4105] virt-handlers client side, and [test_id:4106] virt-handlers server side
tests/infrastructure/certificates.go:188-192
All five cases failed identically (occurred at tests/infrastructure/certificates.go:181): Patch "https://127.0.0.1:36633/api/v1/namespaces/kubevirt/secrets/<kubevirt-operator-certs | kubevirt-virt-api-certs | kubevirt-controller-certs | kubevirt-virt-handler-certs | kubevirt-virt-handler-server-certs>": dial tcp 127.0.0.1:36633: connect: connection refused
[sig-compute] Infrastructure virt-handler should enable/disable ksm and add/remove annotation on all the nodes when the selector is empty
tests/infrastructure/virt-handler.go:95
Should list compute nodeList. Unexpected error: Get "https://127.0.0.1:36633/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:36633: connect: connection refused (occurred at tests/libnode/node.go:300)
[sig-compute] virt-handler node restrictions via validatingAdmissionPolicy: "reject not allowed patches to node", "allow kubevirt related patches to node", and "patching another node rejects kubevirt related patches"
tests/validatingadmissionpolicy/noderestrictions.go:63
All three cases failed identically (occurred at tests/validatingadmissionpolicy/noderestrictions.go:66): Unexpected error: Get "https://127.0.0.1:36633/api?timeout=32s": dial tcp 127.0.0.1:36633: connect: connection refused
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices: "Should successfully passthrough an emulated PCI device" and "Should successfully passthrough 2 emulated PCI devices"
tests/vmi_hostdev_test.go:42
Both cases failed identically (occurred at tests/libkubevirt/kubevirt.go:49): Timed out after ~10s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints
All of the following cases failed identically at tests/infrastructure/prometheus.go:213: Timed out after ~10s. Unexpected error: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused (occurred at tests/libkubevirt/kubevirt.go:49). The [test_id:4139] and [test_id:4554] cases additionally recorded an error from a previous attempt: read tcp 127.0.0.1:52728->127.0.0.1:36633 and 127.0.0.1:42306->127.0.0.1:36633, respectively: read: connection reset by peer.

[test_id:4136] should find one leading virt-controller and two ready
[test_id:4137] should find one leading virt-operator and two ready
[test_id:4138] should be exposed and registered on the metrics endpoint
[test_id:4139] should return Prometheus metrics
should throttle the Prometheus metrics access [test_id:4140] by using IPv4
should throttle the Prometheus metrics access [test_id:6226] by using IPv6
[test_id:4141] should include the metrics for a running VM
should expose kubevirt_node_deprecated_machine_types metric
should include the storage metrics for a running VM [test_id:4142]: storage flush requests, time spent on cache flushing, I/O read operations, I/O write operations, storage read operation time, storage read traffic in bytes, storage write operation time, and storage write traffic in bytes metrics
should include metrics for a running VM: [test_id:4143] network metrics, [test_id:4144] memory metrics, [test_id:4553] vcpu wait, [test_id:4554] vcpu seconds, and [test_id:4556] vmi unused memory
[test_id:4146] should include VMI phase metrics for all running VMs
VMI eviction blocker status should include VMI eviction blocker status for all running VMs [test_id:4148] by IPv4
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints VMI eviction blocker status should include VMI eviction blocker status for all running VMs [test_id:6243] by IPv6 tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc019fba000>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00289c000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008858390>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc009164000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4147]should include kubernetes labels to VMI metrics tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009b94960>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007360c30>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0074f2f00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc009739e20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4555]should include swap metrics tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009b804e0>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00345fe00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000d19a10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc006879760>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK VM creation should expose a VSOCK device Use virtio transitional tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0042b49f0>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00947c4b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0016888d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc00650b640>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK VM creation should expose a VSOCK device Use virtio non-transitional tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009d01b00>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc002502d70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0090c9260>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc0098f6580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK Live migration should retain the CID for migration target tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008bf1c20>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0066665a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006526d80>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc0077efce0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK communicating with VMI via VSOCK should succeed with TLS on both sides tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc002d2a1b0>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009bc4000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000ffca80>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc019f3c000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK communicating with VMI via VSOCK should succeed without TLS on both sides tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00192f680>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00839ca50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007c42750>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc007eb5100>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK should return err if the port is invalid tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0058f06f0>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00745f6d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009985cb0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc009739b20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK should return err if no app listerns on the port tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc001871d10>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007991540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0010bc270>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc004e8a9c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] Instancetype and Preferences with cluster memory overcommit being applied should apply memory overcommit instancetype to VMI even with cluster overcommit set tests/instancetype/instancetype.go:197 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc009a6a6c0>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc005ad3900>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009d00240>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc00650ad40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009d01b60>: Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0080db400>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000f8a8a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 36633, Zone: "", }, Err: <*os.SyscallError | 0xc0098f6c80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16806/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026868021871513600
Test Name Failure Message
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough 2 emulated PCI devices tests/vmi_hostdev_test.go:48 Timed out after 303.872s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*rest.wrapPreviousError | 0xc0087c52e0>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:43365: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:49912->127.0.0.1:43365: read: connection reset by peer { currentErr: <*url.Error | 0xc0034cb7d0>{ Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc0071a10e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0057caf90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc0087c52a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc0055fa000>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc0057cab70>{IP: [127, 0, 0, 1], Port: 49912, Zone: ""}, Addr: <*net.TCPAddr | 0xc0057caba0>{IP: [127, 0, 0, 1], Port: 43365, Zone: ""}, Err: <*os.SyscallError | 0xc002e2f4c0>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred At one point, however, the function did return successfully. 
Yet, Eventually failed because the matcher was not satisfied: Expected <*v1.KubeVirt | 0xc004d6aa08>: { TypeMeta: { Kind: "KubeVirt", APIVersion: "kubevirt.io/v1", }, ObjectMeta: { Name: "kubevirt", GenerateName: "", Namespace: "kubevirt", SelfLink: "", UID: "6e0352b2-4080-4d59-9df7-feda5e15d7db", ResourceVersion: "74799", Generation: 133, CreationTimestamp: { Time: 2026-02-26T04:10:37Z, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: { "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "foregroundDeleteKubeVirt", ], ManagedFields: [ { Manager: "kubectl-create", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T04:10:37Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\".\":{},\"f:certificateRotateStrategy\":{},\"f:configuration\":{},\"f:customizeComponents\":{},\"f:imagePullPolicy\":{},\"f:workloadUpdateStrategy\":{}}}", }, Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T04:11:22Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubevirt.io/latest-observed-api-version\":{},\"f:kubevirt.io/storage-observed-api-version\":{}},\"f:finalizers\":{\".\":{},\"v:\\\"foregroundDeleteKubeVirt\\\"\":{}}}}", }, Subresource: "", }, { Manager: "virt-controller", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T04:12:22Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:outdatedVirtualMachineInstanceWorkloads\":{}}}", }, Subresource: "status", }, { Manager: "tests.test", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T06:11:30Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: 
"{\"f:spec\":{\"f:configuration\":{\"f:changedBlockTrackingLabelSelectors\":{\".\":{},\"f:namespaceLabelSelector\":{},\"f:virtualMachineLabelSelector\":{}},\"f:developerConfiguration\":{},\"f:imagePullPolicy\":{},\"f:permittedHostDevices\":{},\"f:seccompConfiguration\":{\".\":{},\"f:virtualMachineInstanceProfile\":{\".\":{},\"f:customProfile\":{\".\":{},\"f:localhostProfile\":{}}}}}}}", }, Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-26T06:11:45Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\".\":{},\"f:conditions\":{},\"f:defaultArchitecture\":{},\"f:generations\":{},\"f:observedDeploymentConfi... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output to satisfy predicate <func(*v1.KubeVirt) bool>: 0x20a1d40 tests/testsuite/fixture.go:195
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple default clone tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc003316780>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0006db130>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004202f00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc0028c6660>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple clone with snapshot source, create clone before snapshot tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00998cb70>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00692a870>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004623230>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc0044d8fe0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of labels/annotations tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc007af8180>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006e44870>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00585cc60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc007da1380>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of template.labels/template.annotations tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc006a09f50>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0073b9c20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0057cb0b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc004eb8ae0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with changed MAC address tests/clone_test.go:56 Timed out after 10.722s. Unexpected error: <*rest.wrapPreviousError | 0xc0080035c0>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:59952->127.0.0.1:43365: read: connection reset by peer { currentErr: <*url.Error | 0xc0057cbbc0>{ Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0078a50e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004b37b30>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc008003580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc00834bb80>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc004b37770>{IP: [127, 0, 0, 1], Port: 59952, Zone: ""}, Addr: <*net.TCPAddr | 0xc004b377a0>{IP: [127, 0, 0, 1], Port: 43365, Zone: ""}, Err: <*os.SyscallError | 0xc008b7cb40>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware clone with changed SMBios serial tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc000d00ba0>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0096cec30>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0038f9950>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc00069dca0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware should strip firmware UUID tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00240a8a0>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0023bf7c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037bb3e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc00471b3a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure Node Restriction Should disallow to modify VMs on different node tests/infrastructure/security.go:49 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00228fd40>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc002e91680>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0091784e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc0073afb20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure changes to the kubernetes client on the controller rate limiter should lead to delayed VMI starts tests/infrastructure/k8s-client-changes.go:74 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc007c83980>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0038dd2c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00749d5c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc004e1bcc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure changes to the kubernetes client on the virt handler rate limiter should lead to delayed VMI running states tests/infrastructure/k8s-client-changes.go:105 Should list compute nodeList Unexpected error: <*url.Error | 0xc006b17a10>: Get "https://127.0.0.1:43365/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc0073b8d70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006cb9a10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc003b19ea0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0027dd440>: Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00834a230>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004b36390>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43365, Zone: "", }, Err: <*os.SyscallError | 0xc0030deca0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
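Nearly every failure above reduces to the same symptom: the test client could not reach its local API-server endpoint (`connect: connection refused` against a loopback port such as 36633 or 43365), so the individual tests are almost certainly collateral damage from a single control-plane outage rather than independent bugs. When triaging a report like this, it can help to confirm that the failures cluster on one refused endpoint. A minimal sketch (the helper name and sample messages are illustrative, not part of KubeVirt's tooling):

```go
package main

import (
	"fmt"
	"regexp"
)

// dialRefused matches Go net errors of the form seen throughout this report:
//   dial tcp 127.0.0.1:36633: connect: connection refused
var dialRefused = regexp.MustCompile(`dial tcp (\d+\.\d+\.\d+\.\d+:\d+): connect: connection refused`)

// groupRefused counts how many failure messages point at each refused endpoint.
// A single dominant endpoint suggests one shared control-plane outage.
func groupRefused(messages []string) map[string]int {
	counts := map[string]int{}
	for _, msg := range messages {
		for _, m := range dialRefused.FindAllStringSubmatch(msg, -1) {
			counts[m[1]]++
		}
	}
	return counts
}

func main() {
	messages := []string{
		`Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused`,
		`Get "https://127.0.0.1:36633/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:36633: connect: connection refused`,
		`Get "https://127.0.0.1:43365/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43365: connect: connection refused`,
	}
	fmt.Println(groupRefused(messages))
}
```

Fed the failure messages from one run, this yields a single endpoint with a high count, which is the pattern both runs above exhibit.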
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16806/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2024450969969889280
Test Name Failure Message
[sig-compute] Infrastructure cluster profiler for pprof data aggregation when ClusterProfiler configuration is enabled it should allow subresource access tests/infrastructure/cluster-profiler.go:61 Unexpected error: <*errors.StatusError | 0xc0080b5180>: an error on the server ("Internal error encountered: Get \"https://10.244.0.24:8443/dump-profiler\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)") has prevented the request from succeeding { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "an error on the server (\"Internal error encountered: Get \\\"https://10.244.0.24:8443/dump-profiler\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\") has prevented the request from succeeding", Reason: "InternalError", Details: { Name: "", Group: "", Kind: "", UID: "", Causes: [ { Type: "UnexpectedServerResponse", Message: "Internal error encountered: Get \"https://10.244.0.24:8443/dump-profiler\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", Field: "", }, ], RetryAfterSeconds: 0, }, Code: 500, }, } occurred tests/infrastructure/cluster-profiler.go:72
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16786/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026343548059652096
Test Name Failure Message
[sig-compute] Infrastructure changes to the kubernetes client on the virt handler rate limiter should lead to delayed VMI running states tests/infrastructure/k8s-client-changes.go:105 Timed out after 300.000s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*url.Error | 0xc00429bbf0>: Get "https://127.0.0.1:40925/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:40925: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:40925/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc00750e690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d65230>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 40925, Zone: "", }, Err: <*os.SyscallError | 0xc0089ec800>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred At one point, however, the function did return successfully. 
Yet, Eventually failed because the matcher was not satisfied: Expected <*v1.KubeVirt | 0xc003ba3908>: { TypeMeta: { Kind: "KubeVirt", APIVersion: "kubevirt.io/v1", }, ObjectMeta: { Name: "kubevirt", GenerateName: "", Namespace: "kubevirt", SelfLink: "", UID: "38f4ebc3-3e34-4400-924d-abba82c39d1f", ResourceVersion: "74201", Generation: 127, CreationTimestamp: { Time: 2026-02-24T19:55:28Z, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: { "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "foregroundDeleteKubeVirt", ], ManagedFields: [ { Manager: "kubectl-create", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-24T19:55:28Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\".\":{},\"f:certificateRotateStrategy\":{},\"f:configuration\":{},\"f:customizeComponents\":{},\"f:imagePullPolicy\":{},\"f:workloadUpdateStrategy\":{}}}", }, Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-24T19:56:07Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubevirt.io/latest-observed-api-version\":{},\"f:kubevirt.io/storage-observed-api-version\":{}},\"f:finalizers\":{\".\":{},\"v:\\\"foregroundDeleteKubeVirt\\\"\":{}}}}", }, Subresource: "", }, { Manager: "virt-controller", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-24T19:57:01Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:outdatedVirtualMachineInstanceWorkloads\":{}}}", }, Subresource: "status", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-24T21:53:14Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: 
"{\"f:status\":{\".\":{},\"f:conditions\":{},\"f:defaultArchitecture\":{},\"f:generations\":{},\"f:observedDeploymentConfig\":{},\"f:observedDeploymentID\":{},\"f:observedGeneration\":{},\"f:observedKubeVirtRegistry\":{},\"f:observedKubeVirtVersion\":{},\"f:operatorVersion\":{},\"f:phase\":{},\"f:synchronizationAddresses\":{},\"f:targetDeploymentConfig\":{},\"f:targetDeploymentID\":{},\"f:targetKubeVirtRegistry\":{},\"f:targetKubeVirtVersion\":{}}}", }, Subresource: "status", }, { Manager: "tests.test", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-24T21:53:28Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\"f:configuration\":{\"f:changedBlockTrackingLabelSelectors\":{\".\":{},\"f:namespaceLabelSelector\":{},\"f:virtualMachineLabelSelector\":{}},\"f:developerConfiguration\":{\".\":{},\"f:featureGates\":{}},\"f:handlerConfiguration\":{\".\":{},\"f:restClient\":{\".\":{},\"f:rateLimiter\":{\".\":{},\"f:tokenBucketRateLimiter\":{\".\":{},\"f:burst\":{},\"f:qps\":{}}}}},\"f:imagePullPolicy\":{},\"f:seccompConfiguration\":{\".\":{},\"f:virtualMachineInstanceProfile\":{\".\":{},\"f:customProfile\":{\".\":{},\"f:localhostProfile\":{}}}}}}}", }, Subresource: "", }, ], }, Spec: { ImageTag: "", ImageRegistry: "", ImagePullPolicy: "IfNotPresent", ImagePullSecrets: nil, MonitorNamespace: "", ServiceMonitorNamespace: "", MonitorAccount: "", WorkloadUpdateStrategy: { WorkloadUpdateMethods: nil, BatchEvictionSize: nil, BatchEvictionInterval: nil, }, UninstallStrategy: "", CertificateRotationStrategy: {SelfSigned: nil}, ProductVersion: "", ProductName: "", ProductComponent: "", SynchronizationPort: "", Configuration: { CPUModel: "", CPURequest: nil, DeveloperConfiguration: { FeatureGates: [ "NodeRestriction", "CPUManager", "ExperimentalIgnitionSupport", "Sidecar", "Snapshot", "IncrementalBackup", "HostDisk", "EnableVirtioFsStorageVolumes", "DownwardMetrics", "ExpandDisks", "WorkloadEncryptionSEV", "VMExport", "KubevirtSeccompProfile", 
"ObjectGraph", "DeclarativeHotplugVolumes", "NodeRestriction", "DecentralizedLiveMigration", "PanicDevices", "VideoConfig", "UtilityVolumes", "MigrationPriorityQueue", "RebootPolicy", "ContainerPathVolumes", ], DisabledFeatureGates: nil, LessPVCSpaceToleration: 0, MinimumReservePVCBytes: 0, MemoryOvercommit: 0, NodeSelectors: nil, UseEmulation: false, CPUAllocationRatio: 0, MinimumClusterTSCFrequency: nil, DiskVerification: nil, LogVerbosity: nil, ClusterProfiler: false, }, EmulatedMachines: nil, ImagePullPolicy: "IfNotPresent", MigrationConfiguration: nil, MachineType: "", NetworkConfiguration: nil, OVMFPath: "", SELinuxLauncherType: "", DefaultRuntimeClass: "", SMBIOSConfig: nil, ArchitectureConfiguration: nil, EvictionStrategy: nil, AdditionalGuestMemoryOverheadRatio: nil, SupportContainerResources: nil, SupportedGuestAgentVersions: nil, MemBalloonStatsPeriod: nil, PermittedHostDevices: nil, MediatedDevicesConfiguration: nil, DeprecatedMinCPUModel: "", ObsoleteCPUModels: nil, VirtualMachineInstancesPerNode: nil, APIConfiguration: nil, WebhookConfiguration: nil, ControllerConfiguration: nil, HandlerConfiguration: { RestClient: { RateLimiter: { TokenBucketRateLimiter: {QPS: 1, Burst: 1}, }, }, }, TLSConfiguration: nil, SeccompConfiguration: { VirtualMachineInstanceProfile: { CustomProfile: { LocalhostProfile: "kubevirt/kubevirt.json", RuntimeDefaultProfile: false, }, }, }, VMStateStorageClass: "", VirtualMachineOptions: nil, KSMConfiguration: nil, AutoCPULimitNamespaceLabelSelector: nil, LiveUpdateConfiguration: nil, VMRolloutStrategy: nil, CommonInstancetypesDeployment: nil, VirtTemplateDeployment: nil, Instancetype: nil, Hypervisors: nil, ChangedBlockTrackingLabelSelectors: { NamespaceLabelSelector: { MatchLabels: { "changedBlockTracking": "true", }, MatchExpressions: nil, }, VirtualMachineLabelSelector: { MatchLabels: { "changedBlockTracking": "true", }, MatchExpressions: nil, }, }, }, Infra: nil, Workloads: nil, CustomizeComponents: {Patches: nil, Flags: nil}, 
}, Status: { Phase: "Deployed", Conditions: [ { Type: "Available", Status: "True", LastProbeTime: { Time: 2026-02-24T21:53:10Z, }, LastTransitionTime: { Time: 2026-02-24T21:53:10Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Progressing", Status: "False", LastProbeTime: { Time: 2026-02-24T21:53:10Z, }, LastTransitionTime: { Time: 2026-02-24T21:53:10Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Degraded", Status: "False", LastProbeTime: { Time: 2026-02-24T21:53:10Z, }, LastTransitionTime: { Time: 2026-02-24T21:53:10Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Created", Status: "True", LastProbeTime: { Time: 2026-02-24T19:56:56Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "AllResourcesCreated", Message: "All resources were created.", }, ], OperatorVersion: "v1.8.0-beta.0.323+4377d38e94a51c", TargetKubeVirtRegistry: "registry:5000/kubevirt", TargetKubeVirtVersion: "devel", TargetDeploymentConfig: "{\"id\":\"14c07b657a87bc1803569b384655fed24bb172dc\",\"namespace\":\"kubevirt\",\"registry\":\"registry:5000/kubevirt\",\"kubeVirtVersion\":\"devel\",\"virtOperatorImage\":\"registry:5000/kubevirt/virt-operator:devel\",\"additionalProperties\":{\"CertificateRotationStrategy\":\"\\u003cv1.KubeVirtCertificateRotateStrategy Value\\u003e\",\"Configuration\":\"\\u003cv1.KubeVirtConfiguration Value\\u003e\",\"CustomizeComponents\":\"\\u003cv1.CustomizeComponents Value\\u003e\",\"HypervisorName\":\"kvm\",\"ImagePullPolicy\":\"IfNotPresent\",\"ImagePullSecrets\":\"null\",\"Infra\":\"\\u003c*v1.ComponentConfig Value\\u003e\",\"MonitorAccount\":\"\",\"MonitorNamespace\":\"\",\"ProductComponent\":\"\",\"ProductName\":\"\",\"ProductVersion\":\"\",\"ServiceMonitorNamespace\":\"\",\"SynchronizationPort\":\"\",\"UninstallStrategy\":\"\",\"WorkloadUpdateStrategy\":\"\\u003cv1.KubeVirtWorkloadUpdateStrategy 
Value\\u003e\",\"Workloads\":\"\\u003c*v1.ComponentConfig Value\\u003e\"}}", TargetDeploymentID: "14c07b657a87bc1803569b384655fed24bb172dc", ObservedKubeVirtRegistry: "registry:5000/kubevirt", ObservedKubeVirtVersion: "devel", ObservedDeploymentConfig: "{\"id\":\"14c07b657a87bc1803569b384655fed24bb172dc\",\"namespace\":\"kubevirt\",\"registry\":\"registry:5000/kubevirt\",\"kubeVirtVersion\":\"devel\",\"virtOperatorImage\":\"registry:5000/kubevirt/virt-operator:devel\",\"additionalProperties\":{\"CertificateRotationStrategy\":\"\\u003cv1.KubeVirtCertificateRotateStrategy Value\\u003e\",\"Configuration\":\"\\u003cv1.KubeVirtConfiguration Value\\u003e\",\"CustomizeComponents\":\"\\u003cv1.CustomizeComponents Value\\u003e\",\"HypervisorName\":\"kvm\",\"ImagePullPolicy\":\"IfNotPresent\",\"ImagePullSecrets\":\"null\",\"Infra\":\"\\u003c*v1.ComponentConfig Value\\u003e\",\"MonitorAccount\":\"\",\"MonitorNamespace\":\"\",\"ProductComponent\":\"\",\"ProductName\":\"\",\"ProductVersion\":\"\",\"ServiceMonitorNamespace\":\"\",\"SynchronizationPort\":\"\",\"UninstallStrategy\":\"\",\"WorkloadUpdateStrategy\":\"\\u003cv1.KubeVirtWorkloadUpdateStrategy Value\\u003e\",\"Workloads\":\"\\u003c*v1.ComponentConfig Value\\u003e\"}}", ObservedDeploymentID: "14c07b657a87bc1803569b384655fed24bb172dc", OutdatedVirtualMachineInstanceWorkloads: 0, ObservedGeneration: 126, DefaultArchitecture: "amd64", Generations: [ { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstances.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancepresets.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancereplicasets.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: 
"customresourcedefinitions", Namespace: "", Name: "virtualmachines.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancemigrations.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinesnapshots.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinesnapshotcontents.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinerestores.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancetypes.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclusterinstancetypes.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinepools.pool.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "migrationpolicies.migrations.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinepreferences.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclusterpreferences.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", 
Name: "virtualmachineexports.export.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclones.clone.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinebackups.backup.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinebackuptrackers.backup.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "validatingwebhookconfigurations", Namespace: "", Name: "virt-operator-validator", LastGeneration: 206, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "validatingwebhookconfigurations", Namespace: "", Name: "virt-api-validator", LastGeneration: 206, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "mutatingwebhookconfigurations", Namespace: "", Name: "virt-api-mutator", LastGeneration: 206, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-api", LastGeneration: 127, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-api-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-controller", LastGeneration: 125, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-controller-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "daemonsets", Namespace: "kubevirt", Name: "virt-handler", LastGeneration: 3, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "mutatingwebhookconfigurations", Namespace: "", Name: "virt-launcher-pod-mutator", LastGeneration: 47, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-exportproxy", LastGeneration: 23, Hash: 
"", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-exportproxy-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-synchronization-controller", LastGeneration: 23, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-synchronization-controller-pdb", LastGeneration: 1, Hash: "", }, ], SynchronizationAddresses: ["10.244.0.38:9185", "fd10:244::26:9185"], }, } to satisfy predicate <func(*v1.KubeVirt) bool>: 0x20a1d40 tests/testsuite/fixture.go:195
[sig-compute]HookSidecars [rfe_id:2667][crit:medium][vendor:cnv-qe@redhat.com][level:component] VMI definition set sidecar resources [test_id:3155]should successfully start with hook sidecar annotation tests/vmi_hook_sidecar_test.go:93 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0061a6090>: Get "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:40925: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009d394f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0090634a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 40925, Zone: "", }, Err: <*os.SyscallError | 0xc00863f320>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HookSidecars [rfe_id:2667][crit:medium][vendor:cnv-qe@redhat.com][level:component] VMI definition with sidecar feature gate disabled [test_id:2666]should not start with hook sidecar annotation tests/vmi_hook_sidecar_test.go:292 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc002d1d770>: Get "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:40925: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0071fe2d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005807140>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 40925, Zone: "", }, Err: <*os.SyscallError | 0xc007e01480>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough an emulated PCI device tests/vmi_hostdev_test.go:42 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008d80000>: Get "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:40925: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00a71f8b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00639f200>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 40925, Zone: "", }, Err: <*os.SyscallError | 0xc021ba0d40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough 2 emulated PCI devices tests/vmi_hostdev_test.go:42 Timed out after 10.581s. Unexpected error: <*rest.wrapPreviousError | 0xc004c81ca0>: Get "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:40925: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:50902->127.0.0.1:40925: read: connection reset by peer { currentErr: <*url.Error | 0xc006030e70>{ Op: "Get", URL: "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006d47040>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001aa5530>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 40925, Zone: "", }, Err: <*os.SyscallError | 0xc004c81c20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc021b99cc0>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc001aa4ba0>{IP: [127, 0, 0, 1], Port: 50902, Zone: ""}, Addr: <*net.TCPAddr | 0xc001aa4c90>{IP: [127, 0, 0, 1], Port: 40925, Zone: ""}, Err: <*os.SyscallError | 0xc00802cdc0>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]virt-handler multiple HTTP calls should re-use connections and not grow the number of open connections tests/virt-handler_test.go:48 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:40925/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:40925: connect: connection refused, Get "https://127.0.0.1:40925/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:40925: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func23.1() tests/virt-handler_test.go:95 +0x3a
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0029d0cf0>: Get "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:40925: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:40925/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009f1da40>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004b9ac60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 40925, Zone: "", }, Err: <*os.SyscallError | 0xc0062081e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
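The failures in this job all embed raw `syscall.Errno` values: `0x6f` in the `connect` errors and `0x68` in the `read` errors. On Linux these are ECONNREFUSED (111) and ECONNRESET (104), i.e. the local apiserver endpoint on 127.0.0.1:40925 stopped accepting connections and reset in-flight ones. A minimal stdlib sketch (assuming a Linux errno table) to decode the hex values seen in these dumps:

```go
package main

import (
	"fmt"
	"syscall"
)

// errnoName renders a raw errno value the way Go's net errors do,
// e.g. the 0x6f / 0x68 values embedded in the failure dumps above.
func errnoName(code uintptr) string {
	return syscall.Errno(code).Error()
}

func main() {
	for _, code := range []uintptr{0x6f, 0x68} {
		fmt.Printf("0x%x: %s\n", code, errnoName(code))
	}
}
```

On Linux this prints `connection refused` for 0x6f and `connection reset by peer` for 0x68, matching the human-readable prefixes of the same errors.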
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16662/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2024658191056375808
Test Name Failure Message
[rfe_id:1177][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]VirtualMachine when node becomes unhealthy the VMs running in that node should be respawned tests/vm_test.go:991 Timed out after 120.001s. The function passed to Eventually failed at tests/vm_test.go:1003 with: Expected <[]interface {} | len:5, cap:5>: [ <map[string]interface {} | len:5>{ "type": <string>"PodReadyToStartContainers", "observedGeneration": <int64>1, "status": <string>"True", "lastProbeTime": nil, "lastTransitionTime": <string>"2026-02-20T03:11:18Z", }, <map[string]interface {} | len:5>{ "observedGeneration": <int64>1, "status": <string>"True", "lastProbeTime": nil, "lastTransitionTime": <string>"2026-02-20T03:11:19Z", "type": <string>"Initialized", }, <map[string]interface {} | len:5>{ "lastTransitionTime": <string>"2026-02-20T03:29:41Z", "type": <string>"Ready", "observedGeneration": <int64>1, "status": <string>"True", "lastProbeTime": nil, }, <map[string]interface {} | len:5>{ "lastProbeTime": nil, "lastTransitionTime": <string>"2026-02-20T03:29:41Z", "type": <string>"ContainersReady", "observedGeneration": <int64>1, "status": <string>"True", }, <map[string]interface {} | len:5>{ "observedGeneration": <int64>1, "status": <string>"True", "lastProbeTime": nil, "lastTransitionTime": <string>"2026-02-20T03:11:16Z", "type": <string>"PodScheduled", }, ] to find condition of type 'Ready' and status 'False' but got 'True' tests/vm_test.go:1004
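This failure is a matcher miss, not an API outage: the pod's dumped conditions all have status `True`, while the test expected `Ready` to flip to `False` after the node became unhealthy. The predicate the matcher effectively evaluates can be sketched as follows (a simplified stand-in for the helper at tests/vm_test.go:1003, not the actual test code; the `condition` struct and `hasCondition` name are illustrative):

```go
package main

import "fmt"

// condition mirrors the unstructured condition maps in the dump above.
type condition struct {
	Type   string
	Status string
}

// hasCondition reports whether conds contains a condition of the given
// type whose status matches the expected value.
func hasCondition(conds []condition, condType, status string) bool {
	for _, c := range conds {
		if c.Type == condType {
			return c.Status == status
		}
	}
	return false
}

func main() {
	// The dumped pod conditions: everything is still "True".
	conds := []condition{
		{"PodReadyToStartContainers", "True"},
		{"Initialized", "True"},
		{"Ready", "True"},
		{"ContainersReady", "True"},
		{"PodScheduled", "True"},
	}
	// The test wanted Ready=False within 120s and never saw it.
	fmt.Println(hasCondition(conds, "Ready", "False"))
}
```

With the dumped conditions this prints `false`, which is exactly why Eventually timed out after 120s.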
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16659/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026279525918183424
Test Name Failure Message
[sig-compute] Infrastructure cluster profiler for pprof data aggregation when ClusterProfiler configuration is enabled it should allow subresource access tests/infrastructure/cluster-profiler.go:61 Unexpected error: <*errors.StatusError | 0xc002120f00>: an error on the server ("Internal error encountered: context deadline exceeded (Client.Timeout or context cancellation while reading body)") has prevented the request from succeeding { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "an error on the server (\"Internal error encountered: context deadline exceeded (Client.Timeout or context cancellation while reading body)\") has prevented the request from succeeding", Reason: "InternalError", Details: { Name: "", Group: "", Kind: "", UID: "", Causes: [ { Type: "UnexpectedServerResponse", Message: "Internal error encountered: context deadline exceeded (Client.Timeout or context cancellation while reading body)", Field: "", }, ], RetryAfterSeconds: 0, }, Code: 500, }, } occurred tests/infrastructure/cluster-profiler.go:72
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16528/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2024673866105753600
Test Name Failure Message
[sig-compute]VSOCK Live migration should retain the CID for migration target tests/vmi_vsock_test.go:59 Timed out after 305.368s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*rest.wrapPreviousError | 0xc007771160>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:44083: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:46826->127.0.0.1:44083: read: connection reset by peer { currentErr: <*url.Error | 0xc00470f470>{ Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc00660ca00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007141470>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007771120>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc007f24730>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc007141140>{IP: [127, 0, 0, 1], Port: 46826, Zone: ""}, Addr: <*net.TCPAddr | 0xc007141170>{IP: [127, 0, 0, 1], Port: 44083, Zone: ""}, Err: <*os.SyscallError | 0xc008fbc780>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred At one point, however, the function did return successfully. 
Yet, Eventually failed because the matcher was not satisfied: Expected <[]interface {} | len:4, cap:4>: [ <map[string]interface {} | len:6>{ "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Available", "status": <string>"False", "lastProbeTime": <string>"2026-02-20T04:19:45Z", "lastTransitionTime": <string>"2026-02-20T04:19:45Z", }, <map[string]interface {} | len:6>{ "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Progressing", "status": <string>"True", "lastProbeTime": <string>"2026-02-20T04:19:45Z", "lastTransitionTime": <string>"2026-02-20T04:19:45Z", "reason": <string>"DeploymentInProgress", }, <map[string]interface {} | len:6>{ "status": <string>"False", "lastProbeTime": <string>"2026-02-20T04:19:45Z", "lastTransitionTime": <string>"2026-02-20T04:19:45Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Degraded", }, <map[string]interface {} | len:6>{ "status": <string>"True", "lastProbeTime": <string>"2026-02-20T02:49:25Z", "lastTransitionTime": nil, "reason": <string>"AllResourcesCreated", "message": <string>"All resources were created.", "type": <string>"Created", }, ] to find condition of type 'Available' and status 'True' but got 'False' tests/testsuite/fixture.go:195
[sig-compute]VSOCK communicating with VMI via VSOCK should succeed with TLS on both sides tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc006984030>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008b05450>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000af7e00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0087fb360>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK communicating with VMI via VSOCK should succeed without TLS on both sides tests/vmi_vsock_test.go:59 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc005f96ba0>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0058f43c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d074a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc00933c760>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK should return err if the port is invalid tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc007bf2690>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0005a8370>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005fd2c90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0071343a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VSOCK should return err if no app listerns on the port tests/vmi_vsock_test.go:59 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00629a390>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007f24f00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007217800>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0077573c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2830]last eviction should fail for multi-replica virt-controller pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc00980a510>: Get "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libnode/node.go:300
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2799]last eviction should fail for multi-replica virt-api pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc002904d20>: Get "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc008cd4cd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d89ad0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc008d6fc40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when control plane pods are running [test_id:2806]virt-controller and virt-api pods have a pod disruption budget tests/virt_control_plane_test.go:180 Unexpected error: <*url.Error | 0xc002904de0>: Get "https://127.0.0.1:44083/apis/apps/v1/namespaces/kubevirt/deployments": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/apps/v1/namespaces/kubevirt/deployments", Err: <*net.OpError | 0xc007f24c80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0026aaf90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007756580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:184
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when Control plane pods temporarily lose connection to Kubernetes API should fail health checks when connectivity is lost, and recover when connectivity is regained tests/virt_control_plane_test.go:240 Unexpected error: <*url.Error | 0xc00470fa10>: Get "https://127.0.0.1:44083/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler", Err: <*net.OpError | 0xc007011040>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00629a630>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007770240>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:241
[sig-compute]SecurityFeatures Check virt-launcher securityContext With selinuxLauncherType as container_t [test_id:2953][test_id:2895]Ensure virt-launcher pod securityContext type is correctly set and not privileged tests/security_features_test.go:65 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0056dbda0>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007dccf00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0072174a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0048b6380>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]SecurityFeatures Check virt-launcher securityContext With selinuxLauncherType as container_t [test_id:4297]Make sure qemu processes are MCS constrained tests/security_features_test.go:65 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc004804840>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc003cafcc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003360e70>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc009806fe0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]SecurityFeatures Check virt-launcher securityContext With selinuxLauncherType defined as spc_t [test_id:3787]Should honor custom SELinux type for virt-launcher tests/security_features_test.go:65 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0052f6360>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0073fe500>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00183d1a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0079801a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[rfe_id:1177][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]VirtualMachine when node becomes unhealthy the VMs running in that node should be respawned tests/vm_test.go:991 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009863c80>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc000724f50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039cf080>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc008d6fea0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VM Rollout Strategy When using the Stage rollout strategy [test_id:11207]should set RestartRequired when changing any spec field tests/hotplug/rolloutstrategy.go:38 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00266c330>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009821a40>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0025c64b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc005680c80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] virt-handler node restrictions via validatingAdmissionPolicy reject not allowed patches to node tests/validatingadmissionpolicy/noderestrictions.go:63 Unexpected error: <*url.Error | 0xc0017d5140>: Get "https://127.0.0.1:44083/api?timeout=32s": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api?timeout=32s", Err: <*net.OpError | 0xc000364af0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0056da4b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc005681960>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/validatingadmissionpolicy/noderestrictions.go:66
[sig-compute] virt-handler node restrictions via validatingAdmissionPolicy allow kubevirt related patches to node tests/validatingadmissionpolicy/noderestrictions.go:63 Unexpected error: <*url.Error | 0xc007140c60>: Get "https://127.0.0.1:44083/api?timeout=32s": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api?timeout=32s", Err: <*net.OpError | 0xc008b04640>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00088f800>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc00726ea80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/validatingadmissionpolicy/noderestrictions.go:66
[sig-compute] virt-handler node restrictions via validatingAdmissionPolicy patching another node rejects kubevirt related patches tests/validatingadmissionpolicy/noderestrictions.go:63 Unexpected error: <*url.Error | 0xc0012c2db0>: Get "https://127.0.0.1:44083/api?timeout=32s": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api?timeout=32s", Err: <*net.OpError | 0xc00807e0f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00365f650>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0087fab20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/validatingadmissionpolicy/noderestrictions.go:66
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to reference tests/instancetype/reference_policy.go:96 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0056da000>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007dcc000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009888330>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc008fbc000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expand tests/instancetype/reference_policy.go:97 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc007bf2510>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0025b94f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002630120>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0074016a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expandAll tests/instancetype/reference_policy.go:98 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0026aa2a0>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc004e0cff0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037aa7e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007a65a40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]virt-handler multiple HTTP calls should re-use connections and not grow the number of open connections tests/virt-handler_test.go:48 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:44083/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:44083: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:34600->127.0.0.1:44083: read: connection reset by peer, Get "https://127.0.0.1:44083/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:44083: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func23.1() tests/virt-handler_test.go:95 +0x3a
[sig-compute] Infrastructure [rfe_id:4126][crit:medium][vendor:cnv-qe@redhat.com][level:component]Taints and toleration CriticalAddonsOnly taint set on a node [test_id:4134] kubevirt components on that node should not evict tests/infrastructure/taints-and-tolerations.go:59 failed listing kubevirt pods Unexpected error: <*url.Error | 0xc00547ee70>: Get "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/pods": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/pods", Err: <*net.OpError | 0xc000643cc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0052f6b10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0077571a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/taints-and-tolerations.go:65
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [test_id:4099] should be rotated when a new CA is created tests/infrastructure/certificates.go:69 Unexpected error: <*url.Error | 0xc0052f71d0>: Get "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca", Err: <*net.OpError | 0xc0073ff810>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00470ecc0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc003f7bc00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libinfra/certificates.go:56
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [sig-compute][test_id:4100] should be valid during the whole rotation process tests/infrastructure/certificates.go:136 Unexpected error: <*url.Error | 0xc0053e6ea0>: Get "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api", Err: <*net.OpError | 0xc00923e4b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00470fe60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007980a00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libpod/certs.go:51
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4101] virt-operator tests/infrastructure/certificates.go:188 Unexpected error: <*url.Error | 0xc00251b0e0>: Patch "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-operator-certs": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-operator-certs", Err: <*net.OpError | 0xc000365270>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000b89bf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc003a3fb80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4103] virt-api tests/infrastructure/certificates.go:189 Unexpected error: <*url.Error | 0xc000b643f0>: Patch "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-api-certs": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-api-certs", Err: <*net.OpError | 0xc0005a5860>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005f970b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007770220>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4104] virt-controller tests/infrastructure/certificates.go:190 Unexpected error: <*url.Error | 0xc0045760c0>: Patch "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-controller-certs": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-controller-certs", Err: <*net.OpError | 0xc00d388050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003ee3cb0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc00726e000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4105] virt-handlers client side tests/infrastructure/certificates.go:191 Unexpected error: <*url.Error | 0xc003a76900>: Patch "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-certs": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-certs", Err: <*net.OpError | 0xc00917f4f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004e6f080>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc008fbcfc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4106] virt-handlers server side tests/infrastructure/certificates.go:192 Unexpected error: <*url.Error | 0xc001f0a720>: Patch "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-server-certs": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:44083/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-server-certs", Err: <*net.OpError | 0xc003caf720>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0042b31a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0074012e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple default clone tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00183c870>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00047c9b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00638b0e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007a64d60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple clone with snapshot source, create clone before snapshot tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc003fdf710>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008cd5590>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0009480c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0082e2040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of labels/annotations tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00980b470>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0073fed70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0083d3f50>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc003d45b60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of template.labels/template.annotations tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc002deb740>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc000365ae0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007482d20>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0007737a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with changed MAC address tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0070b27b0>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009820a00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039fa810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0079804c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware clone with changed SMBios serial tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc001501980>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0016bb630>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d07140>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc00726ed00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware should strip firmware UUID tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00399f920>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0083db400>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0062f9380>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc0077703a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough an emulated PCI device tests/vmi_hostdev_test.go:42 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0082c5290>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc004e0c870>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00426b200>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc007efc500>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough 2 emulated PCI devices tests/vmi_hostdev_test.go:42 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00547e840>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc000643360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00638b650>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc008c00380>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure changes to the kubernetes client on the controller rate limiter should lead to delayed VMI starts tests/infrastructure/k8s-client-changes.go:74 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0052f6d80>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0073fe500>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00980abd0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc003f7a4e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure changes to the kubernetes client on the virt handler rate limiter should lead to delayed VMI running states tests/infrastructure/k8s-client-changes.go:105 Should list compute nodeList Unexpected error: <*rest.wrapPreviousError | 0xc003d448c0>: Get "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:44083: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:38236->127.0.0.1:44083: read: connection reset by peer { currentErr: <*url.Error | 0xc0056da4e0>{ Op: "Get", URL: "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc0098207d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0018dd530>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc003d446e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc005d0c780>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc0018dc3f0>{IP: [127, 0, 0, 1], Port: 38236, Zone: ""}, Addr: <*net.TCPAddr | 0xc0018dc420>{IP: [127, 0, 0, 1], Port: 44083, Zone: ""}, Err: <*os.SyscallError | 0xc0037cbc20>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred tests/libnode/node.go:300
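The raw `<syscall.Errno>0x6f` and `<syscall.Errno>0x68` values embedded in these dumps are plain Linux errno numbers. A minimal Python sketch (assuming a Linux host, as in these CI jobs) decodes them:

```python
import errno

# The Go error structs above print syscall.Errno values in hex.
# On Linux these decode to the familiar TCP failure modes:
print(errno.errorcode[0x6f])  # ECONNREFUSED (111): nothing listening on the port -> "connect: connection refused"
print(errno.errorcode[0x68])  # ECONNRESET (104): peer dropped an established connection -> "read: connection reset by peer"
```

Both patterns here point at the same apiserver endpoint (127.0.0.1:44083) going away mid-run rather than at the individual tests.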
[sig-compute] Infrastructure tls configuration [test_id:9306]should result only connections with the correct client-side tls configurations are accepted by the components tests/infrastructure/tls-configuration.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc003360270>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00917f590>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00392ac00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc00726f900>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Hyper-V enlightenments VMI with HyperV re-enlightenment enabled when TSC frequency is not exposed on the cluster Should start successfully and be marked as non-migratable tests/hyperv_test.go:53 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:44083/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:44083: connect: connection refused, Get "https://127.0.0.1:44083/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:44083: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func7.2.1(...) tests/hyperv_test.go:50 tests/go_default_test_test.init.func7.2.2() tests/hyperv_test.go:54 +0x25
[sig-compute] Hyper-V enlightenments VMI with HyperV re-enlightenment enabled the vmi with EVMCS HyperV feature should have correct HyperV and cpu features auto filled hyperv and cpu features should be auto filled when EVMCS is enabled tests/hyperv_test.go:53 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:44083/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:44083: connect: connection refused, Get "https://127.0.0.1:44083/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:44083: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func7.2.1(...) tests/hyperv_test.go:50 tests/go_default_test_test.init.func7.2.2() tests/hyperv_test.go:54 +0x25
[sig-compute] Hyper-V enlightenments VMI with HyperV re-enlightenment enabled the vmi with EVMCS HyperV feature should have correct HyperV and cpu features auto filled EVMCS should be enabled when vmi.Spec.Domain.Features.Hyperv.EVMCS is set but the EVMCS.Enabled field is nil tests/hyperv_test.go:53 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:44083/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:44083: connect: connection refused, Get "https://127.0.0.1:44083/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:44083: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func7.2.1(...) tests/hyperv_test.go:50 tests/go_default_test_test.init.func7.2.2() tests/hyperv_test.go:54 +0x25
[sig-compute] Hyper-V enlightenments VMI with HyperV re-enlightenment enabled the vmi with EVMCS HyperV feature should have correct HyperV and cpu features auto filled Verify that features aren't applied when enabled is false tests/hyperv_test.go:53 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:44083/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:44083: connect: connection refused, Get "https://127.0.0.1:44083/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:44083: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func7.2.1(...) tests/hyperv_test.go:50 tests/go_default_test_test.init.func7.2.2() tests/hyperv_test.go:54 +0x25
[sig-compute] Infrastructure virt-handler should enable/disable ksm and add/remove annotation on all the nodes when the selector is empty tests/infrastructure/virt-handler.go:95 Should list compute nodeList Unexpected error: <*url.Error | 0xc0053e65a0>: Get "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc009fece10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007216c60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc003315f20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0072174a0>: Get "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:44083: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:44083/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00923f860>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005b1c810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 44083, Zone: "", }, Err: <*os.SyscallError | 0xc008c003a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16528/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2024583368829571072
Test Name Failure Message
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expand tests/instancetype/reference_policy.go:97 Timed out after 304.346s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*rest.wrapPreviousError | 0xc007ee8fe0>: Get "https://127.0.0.1:34977/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:34977: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:52894->127.0.0.1:34977: read: connection reset by peer { currentErr: <*url.Error | 0xc006af77a0>{ Op: "Get", URL: "https://127.0.0.1:34977/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc0099c7c20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003da7ec0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34977, Zone: "", }, Err: <*os.SyscallError | 0xc007ee8fa0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc009853720>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc003da7a40>{IP: [127, 0, 0, 1], Port: 52894, Zone: ""}, Addr: <*net.TCPAddr | 0xc003da7aa0>{IP: [127, 0, 0, 1], Port: 34977, Zone: ""}, Err: <*os.SyscallError | 0xc003527ac0>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred At one point, however, the function did return successfully. Yet, Eventually failed because the matcher was not satisfied: Expected <[]interface {} | len:4, cap:4>: [ <map[string]interface {} | len:6>{ "lastTransitionTime": <string>"2026-02-19T23:33:34Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Available", "status": <string>"False", "lastProbeTime": <string>"2026-02-19T23:33:34Z", }, <map[string]interface {} | len:6>{ "lastProbeTime": <string>"2026-02-19T23:33:34Z", "lastTransitionTime": <string>"2026-02-19T23:33:34Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Progressing", "status": <string>"True", }, <map[string]interface {} | len:6>{ "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Degraded", "status": <string>"False", "lastProbeTime": <string>"2026-02-19T23:33:34Z", "lastTransitionTime": <string>"2026-02-19T23:33:34Z", }, <map[string]interface {} | len:6>{ "message": <string>"All resources were created.", "type": <string>"Created", "status": <string>"True", "lastProbeTime": <string>"2026-02-19T21:27:06Z", "lastTransitionTime": nil, "reason": <string>"AllResourcesCreated", }, ] to find condition of type 'Available' and status 'True' but got 'False' tests/testsuite/fixture.go:195
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expandAll tests/instancetype/reference_policy.go:98 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00817bad0>: Get "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34977: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc004bc72c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009091590>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34977, Zone: "", }, Err: <*os.SyscallError | 0xc008434f80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]Dry-Run requests KubeVirt CR [test_id:7648]delete a KubeVirt CR tests/dryrun_test.go:480 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc002bfd3b0>: Get "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34977: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00277c7d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003ea0e40>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34977, Zone: "", }, Err: <*os.SyscallError | 0xc006044520>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough an emulated PCI device tests/vmi_hostdev_test.go:42 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008a48750>: Get "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34977: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0005e5540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009779200>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34977, Zone: "", }, Err: <*os.SyscallError | 0xc006e4c180>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]HostDevices with ephemeral disk with emulated PCI devices Should successfully passthrough 2 emulated PCI devices tests/vmi_hostdev_test.go:42 Timed out after 11.016s. Unexpected error: <*url.Error | 0xc006f46060>: Get "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc003c20150>: Get "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34977: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34977/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0089320f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00328ac00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34977, Zone: "", }, Err: <*os.SyscallError | 0xc008e88040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16399/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026238526554640384
Test Name Failure Message
[sig-compute] Hyper-V enlightenments VMI with HyperV re-enlightenment enabled the vmi with EVMCS HyperV feature should have correct HyperV and cpu features auto filled EVMCS should be enabled when vmi.Spec.Domain.Features.Hyperv.EVMCS is set but the EVMCS.Enabled field is nil tests/hyperv_test.go:204 Unexpected error: <*errors.StatusError | 0xc0024e3a40>: rpc error: code = Unavailable desc = error reading from server: EOF { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "rpc error: code = Unavailable desc = error reading from server: EOF", Reason: "", Details: nil, Code: 500, }, } occurred tests/testsuite/kubevirtresource.go:201
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15958/pull-kubevirt-e2e-k8s-1.35-sig-compute-serial/2026778256132280320
Test Name Failure Message
[sig-compute] VMIDefaults MemBalloon defaults Should override period in domain if present in virt-config [test_id:4558]with period 0 tests/compute/vmidefaults.go:164 Timed out after 309.217s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:193 with: Unexpected error: <*rest.wrapPreviousError | 0xc00752c1c0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:34813: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:53060->127.0.0.1:34813: read: connection reset by peer { currentErr: <*url.Error | 0xc0035d56e0>{ Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc008c84730>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0151035c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00752c180>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc008c846e0>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc0035d5560>{IP: [127, 0, 0, 1], Port: 53060, Zone: ""}, Addr: <*net.TCPAddr | 0xc0035d5590>{IP: [127, 0, 0, 1], Port: 34813, Zone: ""}, Err: <*os.SyscallError | 0xc00752c120>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred At one point, however, the function did return successfully. Yet, Eventually failed because the matcher was not satisfied: Expected <[]interface {} | len:4, cap:4>: [ <map[string]interface {} | len:6>{ "status": <string>"False", "lastProbeTime": <string>"2026-02-25T23:37:25Z", "lastTransitionTime": <string>"2026-02-25T23:37:25Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Available", }, <map[string]interface {} | len:6>{ "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Progressing", "status": <string>"True", "lastProbeTime": <string>"2026-02-25T23:37:25Z", "lastTransitionTime": <string>"2026-02-25T23:37:25Z", "reason": <string>"DeploymentInProgress", }, <map[string]interface {} | len:6>{ "status": <string>"False", "lastProbeTime": <string>"2026-02-25T23:37:25Z", "lastTransitionTime": <string>"2026-02-25T23:37:25Z", "reason": <string>"DeploymentInProgress", "message": <string>"Deploying version devel with registry registry:5000/kubevirt", "type": <string>"Degraded", }, <map[string]interface {} | len:6>{ "lastTransitionTime": nil, "reason": <string>"AllResourcesCreated", "message": <string>"All resources were created.", "type": <string>"Created", "status": <string>"True", "lastProbeTime": <string>"2026-02-25T22:18:21Z", }, ] to find condition of type 'Available' and status 'True' but got 'False' tests/testsuite/fixture.go:195
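The matcher in the dump above is scanning the KubeVirt CR's `status.conditions` for an entry of type `Available` with status `True`, and the captured state (still `DeploymentInProgress`) never satisfies it. A minimal standalone sketch of that kind of condition lookup — type and field names are simplified stand-ins, not the actual KubeVirt test helpers:

```go
package main

import "fmt"

// condition mirrors the shape of the entries in the dump: a "type"
// (Available, Progressing, Degraded, Created) and a "status" string.
type condition struct {
	Type   string
	Status string
}

// hasCondition reports whether conds contains a condition of the given
// type with the given status -- the same check the failing matcher
// performs ("find condition of type 'Available' and status 'True'").
func hasCondition(conds []condition, condType, status string) bool {
	for _, c := range conds {
		if c.Type == condType && c.Status == status {
			return true
		}
	}
	return false
}

// sampleConditions reproduces the state captured in the failure above:
// the operator is still deploying, so Available is False.
var sampleConditions = []condition{
	{Type: "Available", Status: "False"},
	{Type: "Progressing", Status: "True"},
	{Type: "Degraded", Status: "False"},
	{Type: "Created", Status: "True"},
}

func main() {
	fmt.Println(hasCondition(sampleConditions, "Available", "True")) // false: still deploying
}
```

In other words, even in the windows where the apiserver was reachable, the cluster never finished redeploying within the 300s `Eventually` budget, so the readiness gate in tests/testsuite/fixture.go kept failing.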
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4136] should find one leading virt-controller and two ready tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc006dda270>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008423e00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007fe56b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007f0e3a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4137]should find one leading virt-operator and two ready tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc015103ce0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008c85a90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001f50390>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00a2ae9c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4138]should be exposed and registered on the metrics endpoint tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc003b09e30>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0093d14f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005931230>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc003ba7020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4139]should return Prometheus metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0036b51a0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0027c8960>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c37170>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008880e20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should throttle the Prometheus metrics access [test_id:4140] by using IPv4 tests/infrastructure/prometheus.go:213 Timed out after 15.017s. Unexpected error: <*url.Error | 0xc005a0db90>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should throttle the Prometheus metrics access [test_id:6226] by using IPv6 tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004a77e30>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc003459180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0074837a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00812b1a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4141]should include the metrics for a running VM tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc009b25b90>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00230b900>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00445fe30>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008e1e560>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should expose kubevirt_node_deprecated_machine_types metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc004af53e0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0007bf400>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005ce7590>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0049eaca0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage flush requests metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc005478a20>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00995d220>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004c82de0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00119c8c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] time spent on cache flushing metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0038682d0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00266a1e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008f9ef00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc001e2bd60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] I/O read operations metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0046fc600>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0028da3c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0051ce450>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0084044c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] I/O write operations metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0051cec90>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00321d360>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008f9ede0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008c5e200>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage read operation time metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008f9fef0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0032c5950>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0056147e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc003b0f560>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage read traffic in bytes metric tests/infrastructure/prometheus.go:213 Timed out after 10.010s. Unexpected error: <*url.Error | 0xc009951110>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage write operation time metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0035d4660>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0007dae10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009b24c00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00220be40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage write traffic in bytes metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0063e6810>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009a698b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0151024e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0090ea780>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4143] network metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0042f8630>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0069f7b30>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0058ea810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0024d9160>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4144] memory metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0044f7bf0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00836eeb0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008278810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00a2aea40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4553] vcpu wait tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0093ff920>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0088d7770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008f9e210>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00708a760>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4554] vcpu seconds tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00716bb90>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006059a90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007fe4f00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc003b0f440>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4556] vmi unused memory tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc005986d50>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0065dbea0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000d79530>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007a7c9c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4146]should include VMI phase metrics for all running VMs tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc004e46210>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc000568690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002af1e00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008063e60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints VMI eviction blocker status should include VMI eviction blocker status for all running VMs [test_id:4148] by IPv4 tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc003c81380>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00a032aa0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0063e7560>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0090ebbc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints VMI eviction blocker status should include VMI eviction blocker status for all running VMs [test_id:6243] by IPv6 tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc005ce6b70>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009008d20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001d2e960>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008ae2280>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4147]should include kubernetes labels to VMI metrics tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0056d3410>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00836f680>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0093fe450>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00a2af4c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4555]should include swap metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008f9eed0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0005a3cc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00716b050>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc009aa0f40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]SecurityFeatures Check virt-launcher securityContext With selinuxLauncherType as container_t [test_id:2953][test_id:2895]Ensure virt-launcher pod securityContext type is correctly set and not privileged tests/security_features_test.go:65 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc001d038c0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00230a2d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007fe5710>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008c5f9c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]SecurityFeatures Check virt-launcher securityContext With selinuxLauncherType as container_t [test_id:4297]Make sure qemu processes are MCS constrained tests/security_features_test.go:65 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc007fe5740>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008741720>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0018f7770>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007676520>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]SecurityFeatures Check virt-launcher securityContext With selinuxLauncherType defined as spc_t [test_id:3787]Should honor custom SELinux type for virt-launcher tests/security_features_test.go:65 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc00401a9f0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0007db7c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc007291020>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00812b440>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple default clone tests/clone_test.go:56 Timed out after 10.822s. Unexpected error: <*rest.wrapPreviousError | 0xc007f0e860>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:33674->127.0.0.1:34813: read: connection reset by peer { currentErr: <*url.Error | 0xc005ab2e10>{ Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008c84e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0090a0210>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007f0e820>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc005ca61e0>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc000b424e0>{IP: [127, 0, 0, 1], Port: 33674, Zone: ""}, Addr: <*net.TCPAddr | 0xc000b42540>{IP: [127, 0, 0, 1], Port: 34813, Zone: ""}, Err: <*os.SyscallError | 0xc00ad26040>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations simple clone with snapshot source, create clone before snapshot tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc001d2fbf0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008df8a50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0090a1200>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0040ab9a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of labels/annotations tests/clone_test.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00507d140>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0093d1180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002e60450>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00752c060>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with only some of template.labels/template.annotations tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008220840>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0005a3e50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00716aba0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0034f6d20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations clone with changed MAC address tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc007483b00>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008e18410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0056149c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008c5e420>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware clone with changed SMBios serial tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008508810>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0065dba90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0085087e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00920f340>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
VirtualMachineClone Tests VM clone [sig-compute]simple VM and cloning operations regarding domain Firmware should strip firmware UUID tests/clone_test.go:56 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc002af1890>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0065dbd60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005ab2e10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00920f740>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure changes to the kubernetes client on the controller rate limiter should lead to delayed VMI starts tests/infrastructure/k8s-client-changes.go:74 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc015102120>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc001fa3220>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005ab3710>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008062cc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure changes to the kubernetes client on the virt handler rate limiter should lead to delayed VMI running states tests/infrastructure/k8s-client-changes.go:105 Should list compute nodeList Unexpected error: <*url.Error | 0xc0090a0c60>: Get "https://127.0.0.1:34813/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc008df80a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0008db1d0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0040ab1a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
[sig-network] [crit:high][vendor:cnv-qe@redhat.com][level:component] [crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance when virt-handler is responsive VMIs shouldn't fail after the kubelet restarts [sig-compute]with default networking tests/network/vmi_lifecycle.go:109 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc006dda9f0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009009680>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0045d9d10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc00752c720>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2830]last eviction should fail for multi-replica virt-controller pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc007482240>: Get "https://127.0.0.1:34813/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc005984230>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00716b710>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008ae3260>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2799]last eviction should fail for multi-replica virt-api pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc0056159b0>: Get "https://127.0.0.1:34813/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc005dfa690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005be95f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008e1ed40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:300
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when control plane pods are running [test_id:2806]virt-controller and virt-api pods have a pod disruption budget tests/virt_control_plane_test.go:180 Unexpected error: <*url.Error | 0xc00716e450>: Get "https://127.0.0.1:34813/apis/apps/v1/namespaces/kubevirt/deployments": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/apps/v1/namespaces/kubevirt/deployments", Err: <*net.OpError | 0xc0005a3c70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006e83650>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008e1e9a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:184
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when Control plane pods temporarily lose connection to Kubernetes API should fail health checks when connectivity is lost, and recover when connectivity is regained tests/virt_control_plane_test.go:240 Unexpected error: <*url.Error | 0xc006e83680>: Get "https://127.0.0.1:34813/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler", Err: <*net.OpError | 0xc005984730>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001d2fd40>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc009aa1580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:241
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] Instancetype and Preferences with cluster memory overcommit being applied should apply memory overcommit instancetype to VMI even with cluster overcommit set tests/instancetype/instancetype.go:197 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc002788000>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0093d0000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008f9e270>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007f0e000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[rfe_id:588][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]ContainerDisk [rfe_id:273][crit:medium][vendor:cnv-qe@redhat.com][level:component]Starting a VirtualMachineInstance should obey the disk verification limits in the KubeVirt CR [test_id:7182]disk verification should fail when the memory limit is too low tests/container_disk_test.go:102 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc004e47f20>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006792280>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0035d59b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007f0f3a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[rfe_id:588][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]ContainerDisk Simulate an upgrade from a version where ImageVolume was disabled to a version where it is enabled Migration from a source launcher with the bind mount workaround to a target launcher without the bind mount workaround should succeed when using simple Alpine vmi tests/container_disk_test.go:225 Unexpected error: <*url.Error | 0xc0035d59e0>: Get "https://127.0.0.1:34813/version": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/version", Err: <*net.OpError | 0xc0065da410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00716e570>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc009293be0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/container_disk_test.go:227
[sig-compute] Infrastructure Start a VirtualMachineInstance when the controller pod is not running and an election happens [test_id:4642]should elect a new controller pod tests/infrastructure/virt-controller-leader-election.go:41 Unexpected error: <*url.Error | 0xc0048f2ba0>: Get "https://127.0.0.1:34813/apis/coordination.k8s.io/v1/namespaces/kubevirt/leases/virt-controller": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/coordination.k8s.io/v1/namespaces/kubevirt/leases/virt-controller", Err: <*net.OpError | 0xc005984f50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0016a6840>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc009aa0620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libinfra/leader.go:38
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [test_id:4099] should be rotated when a new CA is created tests/infrastructure/certificates.go:69 Unexpected error: <*url.Error | 0xc00445ed80>: Get "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/configmaps/kubevirt-ca", Err: <*net.OpError | 0xc00995c230>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc015102300>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008c49840>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libinfra/certificates.go:59
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates [sig-compute][test_id:4100] should be valid during the whole rotation process tests/infrastructure/certificates.go:136 Unexpected error: <*url.Error | 0xc0045d8f00>: Get "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/pods?labelSelector=kubevirt.io%3Dvirt-api", Err: <*net.OpError | 0xc008423770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0059863c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc002a2e5c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libpod/certs.go:51
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4101] virt-operator tests/infrastructure/certificates.go:188 Unexpected error: <*url.Error | 0xc015102330>: Patch "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-operator-certs": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-operator-certs", Err: <*net.OpError | 0xc0088d73b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009aa9e00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007a7ca20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4103] virt-api tests/infrastructure/certificates.go:189 Unexpected error: <*url.Error | 0xc0040e20c0>: Patch "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-api-certs": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-api-certs", Err: <*net.OpError | 0xc0069f6f00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008fb0bd0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc002412e00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4104] virt-controller tests/infrastructure/certificates.go:190 Unexpected error: <*url.Error | 0xc008fb0c00>: Patch "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-controller-certs": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-controller-certs", Err: <*net.OpError | 0xc002310c80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009a60810>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc008c5e3e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4105] virt-handlers client side tests/infrastructure/certificates.go:191 Unexpected error: <*url.Error | 0xc009a60840>: Patch "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-certs": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-certs", Err: <*net.OpError | 0xc002572c30>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004ce6cc0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0090eb520>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[sig-compute] Infrastructure [rfe_id:4102][crit:medium][vendor:cnv-qe@redhat.com][level:component]certificates should be rotated when deleted for [test_id:4106] virt-handlers server side tests/infrastructure/certificates.go:192 Unexpected error: <*url.Error | 0xc003075170>: Patch "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-server-certs": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Patch", URL: "https://127.0.0.1:34813/api/v1/namespaces/kubevirt/secrets/kubevirt-virt-handler-server-certs", Err: <*net.OpError | 0xc0021713b0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0054798f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0091a6180>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/infrastructure/certificates.go:181
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to reference tests/instancetype/reference_policy.go:96 Timed out after 15.018s. Unexpected error: <*url.Error | 0xc004ce6000>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expand tests/instancetype/reference_policy.go:97 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc005be94a0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009008460>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00332ac00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc0024d9600>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expandAll tests/instancetype/reference_policy.go:98 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0018a28d0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006792dc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0035d5260>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc007f0f300>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0035d52c0>: Get "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:34813: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:34813/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00836f540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008278270>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 34813, Zone: "", }, Err: <*os.SyscallError | 0xc003f572c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16923/pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations/2026268759437611008
Test Name Failure Message
[rfe_id:393][crit:high][vendor:cnv-qe@redhat.com][level:system][sig-compute] VM Live Migration Starting a VirtualMachineInstance with a Alpine disk [test_id:1783] should be successfully migrated multiple times with cloud-init disk tests/migration/migration.go:547
migration should not fail
Expected
    <v1.VirtualMachineInstanceMigrationPhase>: Failed
not to equal
    <v1.VirtualMachineInstanceMigrationPhase>: Failed
vendor/github.com/onsi/gomega/internal/async_assertion.go:337
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16883/pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations/2026262639117602816
Test Name Failure Message
[rfe_id:393][crit:high][vendor:cnv-qe@redhat.com][level:system][sig-compute] Live Migration with a live-migrate eviction strategy set [ref_id:2293] with a VMI running with an eviction strategy set with node tainted during node drain [test_id:2224] should handle mixture of VMs with different eviction strategies. tests/migration/eviction_strategy.go:317 node/node03 cordoned evicting pod kubevirt-test-default1/virt-launcher-testvmi-qmvb5-jftht evicting pod kubevirt-test-default1/virt-launcher-testvmi-k7pqp-dl49d evicting pod kubevirt-test-default1/virt-launcher-testvmi-m6krb-l5csb pod/virt-launcher-testvmi-m6krb-l5csb evicted evicting pod kubevirt-test-default1/virt-launcher-testvmi-k7pqp-dl49d evicting pod kubevirt-test-default1/virt-launcher-testvmi-qmvb5-jftht evicting pod kubevirt-test-default1/virt-launcher-testvmi-k7pqp-dl49d evicting pod kubevirt-test-default1/virt-launcher-testvmi-qmvb5-jftht evicting pod kubevirt-test-default1/virt-launcher-testvmi-k7pqp-dl49d evicting pod kubevirt-test-default1/virt-launcher-testvmi-qmvb5-jftht evicting pod kubevirt-test-default1/virt-launcher-testvmi-k7pqp-dl49d evicting pod kubevirt-test-default1/virt-launcher-testvmi-qmvb5-jftht evicting pod kubevirt-test-default1/virt-launcher-testvmi-k7pqp-dl49d evicting pod kubevirt-test-default1/virt-launcher-testvmi-qmvb5-jftht %!(EXTRA string=error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-k7pqp" error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-qmvb5" error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1" (will retry 
after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Evacuation in progress: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-k7pqp" error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Evacuation in progress: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-qmvb5" error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Evacuation in progress: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-k7pqp" error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Evacuation in progress: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-qmvb5" error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Evacuation in progress: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-k7pqp" error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Evacuation in progress: Eviction triggered evacuation of VMI "kubevirt-test-default1/testvmi-qmvb5" error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Eviction request for target Pod error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n 
"kubevirt-test-default1" (will retry after 5s): admission webhook "virt-launcher-eviction-interceptor.kubevirt.io" denied the request: Eviction request for target Pod There are pending pods in node "node03" when an error occurred: [error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1": rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:54324->127.0.0.1:2379: read: connection reset by peer, error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1": rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:54324->127.0.0.1:2379: read: connection reset by peer] pod/virt-launcher-testvmi-k7pqp-dl49d pod/virt-launcher-testvmi-qmvb5-jftht error: unable to drain node "node03" due to error: [error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1": rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:54324->127.0.0.1:2379: read: connection reset by peer, error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1": rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:54324->127.0.0.1:2379: read: connection reset by peer], continuing command... 
There are pending nodes to be drained: node03 error when evicting pods/"virt-launcher-testvmi-qmvb5-jftht" -n "kubevirt-test-default1": rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:54324->127.0.0.1:2379: read: connection reset by peer error when evicting pods/"virt-launcher-testvmi-k7pqp-dl49d" -n "kubevirt-test-default1": rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:54324->127.0.0.1:2379: read: connection reset by peer ) Unexpected error: <*errors.errorString | 0xc005e663c0>: command failed: cannot run command "/home/prow/go/src/github.com/kubevirt/kubevirt/kubevirtci/_ci-configs/k8s-1.35/.kubectl drain node03 --delete-emptydir-data --pod-selector kubevirt.io=virt-launcher --ignore-daemonsets=true --force --timeout=180s": exit status 1 { s: "command failed: cannot run command \"/home/prow/go/src/github.com/kubevirt/kubevirt/kubevirtci/_ci-configs/k8s-1.35/.kubectl drain node03 --delete-emptydir-data --pod-selector kubevirt.io=virt-launcher --ignore-daemonsets=true --force --timeout=180s\": exit status 1", } occurred tests/libnode/node.go:136
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16812/pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations/2026565321984315392
Test Name Failure Message
[rfe_id:393][crit:high][vendor:cnv-qe@redhat.com][level:system][sig-compute] VM Live Migration Starting a VirtualMachineInstance migration monitor Migration should generate empty isos of the right size on the target tests/migration/migration.go:1703
Timed out after 120.001s.
VirtualMachineInstanceMigration/test-migration-pctbb expected phase is 'PreparingTarget' but got 'Failed'
tests/migration/migration.go:1744
compute pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16684/pull-kubevirt-e2e-k8s-1.35-sig-compute-migrations/2025865921406439424
Test Name Failure Message
[rfe_id:393][crit:high][vendor:cnv-qe@redhat.com][level:system][sig-compute] VM Live Migration [test_id:8482] Migration Metrics exposed to prometheus during VM migration tests/migration/migration.go:2257
Expected success, but got an error:
    <expect.TimeoutError>: expect: timer expired after 120 seconds
    120000000000
tests/migration/migration.go:2267
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16883/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2026269417033175040
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance with boot order [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]should be able to boot from selected disk [test_id:1628]Cirros as first boot tests/vmi_lifecycle_test.go:321
Timed out after 120.001s.
Timed out waiting for VMI testvmi-74l85 to enter [Running] phase(s)
Expected
    <v1.VirtualMachineInstancePhase>: Scheduling
to be an element of
    <[]v1.VirtualMachineInstancePhase | len:1, cap:1>: ["Running"]
tests/libwait/wait.go:77
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16877/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2026578520875995136
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown tests/vmi_lifecycle_test.go:1629 Timed out after 15.001s. expected object to be gone, but it still exists: *v1.VirtualMachineInstance metadata: <v1.ObjectMeta>: { Name: "testvmi-rr7ng", GenerateName: "", Namespace: "kubevirt-test-default1", SelfLink: "", UID: "44261bcd-97df-45d2-ab1a-d2c823881cd5", ResourceVersion: "18601", Generation: 9, CreationTimestamp: { Time: 2026-02-25T09:47:07Z, }, DeletionTimestamp: { Time: 2026-02-25T09:47:32Z, }, DeletionGracePeriodSeconds: 0, Labels: { "kubevirt.io/nodeName": "kind-1.34-worker", }, Annotations: { "kubevirt.io/created-by-test": "[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown", "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "kubevirt.io/foregroundDeleteVirtualMachine", ], ManagedFields: nil, } status: <v1.VirtualMachineInstanceStatus>: { NodeName: "kind-1.34-worker", Reason: "", Conditions: [ { Type: "Ready", Status: "False", LastProbeTime: { Time: 2026-02-25T09:47:32Z, }, LastTransitionTime: { Time: 2026-02-25T09:47:32Z, }, Reason: "PodTerminating", Message: "virt-launcher pod is terminating", }, { Type: "LiveMigratable", Status: "False", LastProbeTime: { Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "InterfaceNotLiveMigratable", Message: "cannot migrate VMI which does not use masquerade or a migratable plugin to connect to the pod network", }, { Type: "StorageLiveMigratable", Status: "False", LastProbeTime: { 
Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "NotMigratable", Message: "InterfaceNotLiveMigratable: cannot migrate VMI which does not use masquerade or a migratable plugin to connect to the pod network", }, ], Phase: "Running", PhaseTransitionTimestamps: [ { Phase: "Pending", PhaseTransitionTimestamp: { Time: 2026-02-25T09:47:07Z, }, }, { Phase: "Scheduling", PhaseTransitionTimestamp: { Time: 2026-02-25T09:47:07Z, }, }, { Phase: "Scheduled", PhaseTransitionTimestamp: { Time: 2026-02-25T09:47:28Z, }, }, { Phase: "Running", PhaseTransitionTimestamp: { Time: 2026-02-25T09:47:31Z, }, }, ], Interfaces: [ { IP: "10.244.1.54", MAC: "72:f6:d4:d2:79:20", Name: "default", IPs: ["10.244.1.54"], PodInterfaceName: "eth0", InterfaceName: "", InfoSource: "domain", QueueCount: 1, LinkState: "up", }, ], GuestOSInfo: {Name: "", KernelRelease: "", Version: "", PrettyName: "", VersionID: "", KernelVersion: "", Machine: "", ID: ""}, MigrationState: nil, MigrationMethod: "BlockMigration", MigrationTransport: "Unix", QOSClass: "Burstable", LauncherContainerImageVersion: "registry:5000/kubevirt/virt-launcher:devel", EvacuationNodeName: "", ActivePods: { "eb8fd8b7-f74f-4194-b448-8ade5c193a02": "kind-1.34-worker", }, VolumeStatus: [ { Name: "disk0", Target: "vda", Phase: "", Reason: "", Message: "", PersistentVolumeClaimInfo: nil, HotplugVolume: nil, Size: 0, MemoryDumpVolume: nil, ContainerDiskVolume: {Checksum: 1068092945}, }, ], KernelBootStatus: nil, FSFreezeStatus: "", TopologyHints: nil, VirtualMachineRevisionName: "", RuntimeUser: 107, VSOCKCID: nil, SelinuxContext: "none", Machine: { Type: "virt-rhel9.8.0", }, CurrentCPUTopology: {Cores: 1, Sockets: 1, Threads: 1}, Memory: { GuestAtBoot: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "", Format: "BinarySI", }, GuestCurrent: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "", Format: "BinarySI", }, GuestRequested: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "", 
Format: "BinarySI", }, }, MigratedVolumes: nil, DeviceStatus: nil,... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output tests/vmi_lifecycle_test.go:1655
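Several entries in this report, including the one above, are cut short by Gomega ("Gomega truncated this representation as it exceeds 'format.MaxLength'"). Following the hint in the failure output itself, a minimal suite-level fragment to surface the full object would look like this (the package name and `init` placement are illustrative, not KubeVirt's actual suite setup; very large dumps may bloat CI logs):

```go
// Sketch: disable Gomega's failure-output truncation so the full VMI object
// prints in failure messages. Placement is illustrative; KubeVirt's suite
// configuration may differ.
package tests_test

import "github.com/onsi/gomega/format"

func init() {
	// format.MaxLength defaults to 4000 bytes; 0 disables truncation entirely.
	format.MaxLength = 0
}
```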
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16877/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2026348635632963584
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance [test_id:1622]should log libvirtd logs tests/vmi_lifecycle_test.go:178 Timed out after 11.001s. Expected <string>: {"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:88","timestamp":"2026-02-24T17:52:15.253412Z"} {"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:91","timestamp":"2026-02-24T17:52:15.253453Z"} {"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu+unix:///session?socket=/var/run/libvirt/virtqemud-sock","pos":"libvirt.go:682","timestamp":"2026-02-24T17:52:15.253777Z"} {"component":"virt-launcher","level":"info","msg":"Connected to libvirt daemon","pos":"libvirt.go:697","timestamp":"2026-02-24T17:52:15.755836Z"} {"component":"virt-launcher","level":"info","msg":"Registered libvirt event notify callback","pos":"client.go:631","timestamp":"2026-02-24T17:52:15.758299Z"} {"component":"virt-launcher","level":"info","msg":"Marked as ready","pos":"virt-launcher.go:82","timestamp":"2026-02-24T17:52:15.758482Z"} {"component":"virt-launcher","kind":"","level":"info","msg":"Executing PreStartHook on VMI pod environment","name":"testvmi-89mqv","namespace":"kubevirt-test-default1","pos":"manager.go:735","timestamp":"2026-02-24T17:52:22.211438Z","uid":"5ff349d5-4f9b-48fb-a4ea-246ac18cea72"} {"component":"virt-launcher","kind":"","level":"info","msg":"Starting PreCloudInitIso hook","name":"testvmi-89mqv","namespace":"kubevirt-test-default1","pos":"manager.go:744","timestamp":"2026-02-24T17:52:22.211485Z","uid":"5ff349d5-4f9b-48fb-a4ea-246ac18cea72"} {"component":"virt-launcher","level":"info","msg":"Found IPv4 nameservers in /etc/resolv.conf: 
10.64.0.10","pos":"resolveconf.go:185","timestamp":"2026-02-24T17:52:22.212107Z"} {"component":"virt-launcher","level":"info","msg":"Found IPv6 nameservers in /etc/resolv.conf: ","pos":"resolveconf.go:186","timestamp":"2026-02-24T17:52:22.212142Z"} {"component":"virt-launcher","level":"info","msg":"Found search domains in /etc/resolv.conf: kubevirt-test-default1.svc.cluster.local svc.cluster.local cluster.local dns.podman kubevirt-prow-jobs.svc.cluster.local eu-west-1.compute.internal","pos":"resolveconf.go:187","timestamp":"2026-02-24T17:52:22.212152Z"} {"component":"virt-launcher","level":"info","msg":"Starting SingleClientDHCPServer","pos":"server.go:65","timestamp":"2026-02-24T17:52:22.212191Z"} {"component":"virt-launcher","level":"info","msg":"Driver cache mode for /var/run/kubevirt-ephemeral-disks/disk-data/disk0/disk.qcow2 set to none","pos":"converter.go:486","timestamp":"2026-02-24T17:52:22.225983Z"} {"component":"virt-launcher","kind":"","level":"info","msg":"Allocating 3 hotplug ports","name":"testvmi-89mqv","namespace":"kubevirt-test-default1","pos":"manager.go:1412","timestamp":"2026-02-24T17:52:22.230260Z","uid":"5ff349d5-4f9b-48fb-a4ea-246ac18cea72"} {"component":"virt-launcher","kind":"","level":"info","msg":"Domain XML generated. 
Base64 dump PGRvbWFpbiB0eXBlPSJrdm0iIHhtbG5zOnFlbXU9Imh0dHA6Ly9saWJ2aXJ0Lm9yZy9zY2hlbWFzL2RvbWFpbi9xZW11LzEuMCI+Cgk8bmFtZT5rdWJldmlydC10ZXN0LWRlZmF1bHQxX3Rlc3R2bWktODltcXY8L25hbWU+Cgk8bWVtb3J5IHVuaXQ9ImIiPjI2ODQzNTQ1NjwvbWVtb3J5PgoJPG9zPgoJCTx0eXBlIGFyY2g9ImFhcmNoNjQiIG1hY2hpbmU9InZpcnQiPmh2bTwvdHlwZT4KCQk8bG9hZGVyIHJlYWRvbmx5PSJ5ZXMiIHNlY3VyZT0ibm8iIHR5cGU9InBmbGFzaCI+L3Vzci9zaGFyZS9BQVZNRi9BQVZNRl9DT0RFLmZkPC9sb2FkZXI+CgkJPG52cmFtIHRlbXBsYXRlPSIvdXNyL3NoYXJlL0FBVk1GL0FBVk1GX1ZBUlMuZmQiPi92YXIvcnVuL2t1YmV2aXJ0LXByaXZhdGUvbGlidmlydC9xZW11L252cmFtL3Rlc3R2bWktODltcXZfVkFSUy5mZDwvbnZyYW0+Cgk8L29zPgoJPHN5c2luZm8gdHlwZT0ic21iaW9zIj4KCQk8c3lzdGVtPgoJCQk8ZW50cnkgbmFtZT0idXVpZCI+ZDBlOTU0YmMtM2Y4ZC00ODYwLWIxMDItNjJjOGMwZGE1NmZhPC9lbnRyeT4KCQkJPGVudHJ5IG5hbWU9Im1hbnVmYWN0dXJlciI+S3ViZVZpcnQ8L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0iZmFtaWx5Ij5LdWJlVmlydDwvZW50cnk+CgkJCTxlbnRyeSBuYW1lPSJwcm9kdWN0Ij5Ob25lPC9lbnRyeT4KCQkJPGVudHJ5IG5hbWU9InNrdSI+PC9lbnRyeT4KCQkJPGVudHJ5IG5hbWU9InZlcnNpb24iPjwvZW50cnk+CgkJPC9zeXN0ZW0+CgkJPGJpb... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain substring <string>: libvirt version: tests/vmi_lifecycle_test.go:186
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16796/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2021939095588048896
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with an active pod. [test_id:1651]should result in pod being terminated tests/vmi_lifecycle_test.go:1577 Timed out after 60.000s. Timed out waiting for VMI testvmi-w67g8 to enter [Running] phase(s) Expected <v1.VirtualMachineInstancePhase>: Scheduling to be an element of <[]v1.VirtualMachineInstancePhase | len:1, cap:1>: ["Running"] tests/libwait/wait.go:77
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16786/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2026355723834757120
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown tests/vmi_lifecycle_test.go:1629 Timed out after 15.000s. expected object to be gone, but it still exists: *v1.VirtualMachineInstance metadata: <v1.ObjectMeta>: { Name: "testvmi-h24xz", GenerateName: "", Namespace: "kubevirt-test-default1", SelfLink: "", UID: "14cc5f7b-ed7e-44da-a294-5c0175e2b14d", ResourceVersion: "22242", Generation: 10, CreationTimestamp: { Time: 2026-02-24T19:16:41Z, }, DeletionTimestamp: { Time: 2026-02-24T19:17:02Z, }, DeletionGracePeriodSeconds: 0, Labels: { "kubevirt.io/nodeName": "kind-1.34-worker", }, Annotations: { "kubevirt.io/created-by-test": "[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown", "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "kubevirt.io/foregroundDeleteVirtualMachine", ], ManagedFields: nil, } status: <v1.VirtualMachineInstanceStatus>: { NodeName: "kind-1.34-worker", Reason: "", Conditions: [ { Type: "Ready", Status: "False", LastProbeTime: { Time: 2026-02-24T19:17:02Z, }, LastTransitionTime: { Time: 2026-02-24T19:17:02Z, }, Reason: "PodTerminating", Message: "virt-launcher pod is terminating", }, { Type: "LiveMigratable", Status: "False", LastProbeTime: { Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "InterfaceNotLiveMigratable", Message: "cannot migrate VMI which does not use masquerade or a migratable plugin to connect to the pod network", }, { Type: "StorageLiveMigratable", Status: "False", LastProbeTime: { 
Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "NotMigratable", Message: "InterfaceNotLiveMigratable: cannot migrate VMI which does not use masquerade or a migratable plugin to connect to the pod network", }, ], Phase: "Failed", PhaseTransitionTimestamps: [ { Phase: "Pending", PhaseTransitionTimestamp: { Time: 2026-02-24T19:16:41Z, }, }, { Phase: "Scheduling", PhaseTransitionTimestamp: { Time: 2026-02-24T19:16:41Z, }, }, { Phase: "Scheduled", PhaseTransitionTimestamp: { Time: 2026-02-24T19:17:00Z, }, }, { Phase: "Running", PhaseTransitionTimestamp: { Time: 2026-02-24T19:17:01Z, }, }, { Phase: "Failed", PhaseTransitionTimestamp: { Time: 2026-02-24T19:17:07Z, }, }, ], Interfaces: [ { IP: "", MAC: "0a:00:dc:cf:f1:45", Name: "default", IPs: nil, PodInterfaceName: "eth0", InterfaceName: "", InfoSource: "domain", QueueCount: 1, LinkState: "up", }, ], GuestOSInfo: {Name: "", KernelRelease: "", Version: "", PrettyName: "", VersionID: "", KernelVersion: "", Machine: "", ID: ""}, MigrationState: nil, MigrationMethod: "BlockMigration", MigrationTransport: "Unix", QOSClass: "Burstable", LauncherContainerImageVersion: "registry:5000/kubevirt/virt-launcher:devel", EvacuationNodeName: "", ActivePods: { "5fa12f28-f1b5-41e8-b6c6-c5e540b39756": "kind-1.34-worker", }, VolumeStatus: [ { Name: "disk0", Target: "vda", Phase: "", Reason: "", Message: "", PersistentVolumeClaimInfo: nil, HotplugVolume: nil, Size: 0, MemoryDumpVolume: nil, ContainerDiskVolume: {Checksum: 1068092945}, }, ], KernelBootStatus: nil, FSFreezeStatus: "", TopologyHints: nil, VirtualMachineRevisionName: "", RuntimeUser: 107, VSOCKCID: nil, SelinuxContext: "none", Machine: { Type: "virt-rhel9.8.0", }, CurrentCPUTopology: {Cores: 1, Sockets: 1, Threads: 1}, Memory: { GuestAtBoot: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "", Format: "BinarySI", }, GuestCurrent: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "", Format: "BinarySI", }, GuestRequested: { i: 
{value: 268435456, scale: 0}, d: {Dec... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output tests/vmi_lifecycle_test.go:1655
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16687/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2025859957387169792
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance with affinity [test_id:1638]the vmi with node affinity and anti-pod affinity should not be scheduled tests/vmi_lifecycle_test.go:837 Timed out after 60.000s. Timed out waiting for VMI testvmi-t52ff to enter [Scheduled Running] phase(s) Expected <v1.VirtualMachineInstancePhase>: Scheduling to be an element of <[]v1.VirtualMachineInstancePhase | len:2, cap:2>: ["Scheduled", "Running"] tests/libwait/wait.go:77
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with ACPI and some grace period seconds [rfe_id:273][crit:medium][vendor:cnv-qe@redhat.com][level:component]should result in vmi status succeeded [test_id:1653]with set grace period seconds tests/vmi_lifecycle_test.go:1624 Timed out after 10.001s. VirtualMachineInstance/testvmi-n8wkz expected phase is 'Succeeded' but got 'Running' tests/vmi_lifecycle_test.go:1621
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16384/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64/2026327491408302080
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance when virt-handler is responsive [test_id:1633]should indicate that a node is ready for vmis tests/tests_suite_test.go:109 Timed out after 10.000s. Unexpected error: <*errors.errorString | 0x4003003300>: failed to call healthz endpoint: error upgrading connection: unable to upgrade connection: pod not found ("virt-api-5b85c98865-68wpw_kubevirt"), component: "virt-api", pod: "virt-api-5b85c98865-68wpw" { s: "failed to call healthz endpoint: error upgrading connection: unable to upgrade connection: pod not found (\"virt-api-5b85c98865-68wpw_kubevirt\"), component: \"virt-api\", pod: \"virt-api-5b85c98865-68wpw\"", } occurred tests/tests_suite_test.go:205
compute pull-kubevirt-e2e-k8s-1.33-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16846/pull-kubevirt-e2e-k8s-1.33-sig-compute/2024400360596049920
Test Name Failure Message
[sig-compute] [rfe_id:127][posneg:negative][crit:medium][vendor:cnv-qe@redhat.com][level:component]Console [rfe_id:127][posneg:negative][crit:medium][vendor:cnv-qe@redhat.com][level:component]A new VirtualMachineInstance without a serial console [test_id:4118]should run but not be connectable via the serial console tests/compute/console.go:129 Unexpected Warning event received: testvmi-wptww,2f110c0b-b9dd-4b29-8788-e6c3a0d9492f: server error. command SyncVMI failed: "LibvirtError(Code=1, Domain=0, Message='An error occurred, but the cause is unknown')" Expected <string>: Warning not to equal <string>: Warning tests/watcher/watcher.go:195
compute pull-kubevirt-e2e-k8s-1.33-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16821/pull-kubevirt-e2e-k8s-1.33-sig-compute/2025846269896822784
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance [test_id:1622]should log libvirtd logs tests/vmi_lifecycle_test.go:178 Timed out after 11.000s. Expected <string>: {"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:88","timestamp":"2026-02-23T08:31:29.308511Z"} {"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:91","timestamp":"2026-02-23T08:31:29.308557Z"} {"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu+unix:///session?socket=/var/run/libvirt/virtqemud-sock","pos":"libvirt.go:682","timestamp":"2026-02-23T08:31:29.308766Z"} {"component":"virt-launcher","level":"info","msg":"Connected to libvirt daemon","pos":"libvirt.go:697","timestamp":"2026-02-23T08:31:29.812304Z"} {"component":"virt-launcher","level":"info","msg":"Registered libvirt event notify callback","pos":"client.go:631","timestamp":"2026-02-23T08:31:29.819359Z"} {"component":"virt-launcher","level":"info","msg":"Marked as ready","pos":"virt-launcher.go:82","timestamp":"2026-02-23T08:31:29.820153Z"} {"component":"virt-launcher","kind":"","level":"info","msg":"Executing PreStartHook on VMI pod environment","name":"testvmi-fb92r","namespace":"kubevirt-test-default1","pos":"manager.go:735","timestamp":"2026-02-23T08:31:33.260205Z","uid":"087b573d-d94f-485b-890f-bf3583ed4a2c"} {"component":"virt-launcher","kind":"","level":"info","msg":"Starting PreCloudInitIso hook","name":"testvmi-fb92r","namespace":"kubevirt-test-default1","pos":"manager.go:744","timestamp":"2026-02-23T08:31:33.260552Z","uid":"087b573d-d94f-485b-890f-bf3583ed4a2c"} {"component":"virt-launcher","level":"info","msg":"Found IPv4 nameservers in /etc/resolv.conf: 
10.96.0.10","pos":"resolveconf.go:185","timestamp":"2026-02-23T08:31:33.262105Z"} {"component":"virt-launcher","level":"info","msg":"Found IPv6 nameservers in /etc/resolv.conf: ","pos":"resolveconf.go:186","timestamp":"2026-02-23T08:31:33.262179Z"} {"component":"virt-launcher","level":"info","msg":"Found search domains in /etc/resolv.conf: kubevirt-test-default1.svc.cluster.local svc.cluster.local cluster.local","pos":"resolveconf.go:187","timestamp":"2026-02-23T08:31:33.262205Z"} {"component":"virt-launcher","level":"info","msg":"Starting SingleClientDHCPServer","pos":"server.go:65","timestamp":"2026-02-23T08:31:33.262333Z"} {"component":"virt-launcher","level":"info","msg":"Driver cache mode for /var/run/kubevirt-ephemeral-disks/disk-data/disk0/disk.qcow2 set to none","pos":"converter.go:486","timestamp":"2026-02-23T08:31:33.277861Z"} {"component":"virt-launcher","kind":"","level":"info","msg":"Allocating 3 hotplug ports","name":"testvmi-fb92r","namespace":"kubevirt-test-default1","pos":"manager.go:1412","timestamp":"2026-02-23T08:31:33.291088Z","uid":"087b573d-d94f-485b-890f-bf3583ed4a2c"} {"component":"virt-launcher","kind":"","level":"info","msg":"Domain XML generated. 
Base64 dump PGRvbWFpbiB0eXBlPSJrdm0iIHhtbG5zOnFlbXU9Imh0dHA6Ly9saWJ2aXJ0Lm9yZy9zY2hlbWFzL2RvbWFpbi9xZW11LzEuMCI+Cgk8bmFtZT5rdWJldmlydC10ZXN0LWRlZmF1bHQxX3Rlc3R2bWktZmI5MnI8L25hbWU+Cgk8bWVtb3J5IHVuaXQ9ImIiPjEzNDIxNzcyODwvbWVtb3J5PgoJPG9zPgoJCTx0eXBlIGFyY2g9Ing4Nl82NCIgbWFjaGluZT0icTM1Ij5odm08L3R5cGU+CgkJPHNtYmlvcyBtb2RlPSJzeXNpbmZvIj48L3NtYmlvcz4KCTwvb3M+Cgk8c3lzaW5mbyB0eXBlPSJzbWJpb3MiPgoJCTxzeXN0ZW0+CgkJCTxlbnRyeSBuYW1lPSJ1dWlkIj45YTcwNTQyZi0zMzVlLTQyM2MtYjE1Ny05ZTY5YmZiZjhkYjk8L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0ibWFudWZhY3R1cmVyIj5LdWJlVmlydDwvZW50cnk+CgkJCTxlbnRyeSBuYW1lPSJmYW1pbHkiPkt1YmVWaXJ0PC9lbnRyeT4KCQkJPGVudHJ5IG5hbWU9InByb2R1Y3QiPk5vbmU8L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0ic2t1Ij48L2VudHJ5PgoJCQk8ZW50cnkgbmFtZT0idmVyc2lvbiI+PC9lbnRyeT4KCQk8L3N5c3RlbT4KCQk8Ymlvcz48L2Jpb3M+CgkJPGJhc2VCb2FyZD48L2Jhc2VCb2FyZD4KCQk8Y2hhc3Npcz48L2NoYXNzaXM+Cgk8L3N5c2luZm8+Cgk8ZGV2aWNlcz4KCQk8aW50ZXJmYWNlIHR5cGU9ImV0aGVybmV0Ij4KCQkJPHNvdXJjZT48L3NvdXJjZT4KCQkJPHRhcmdldCBkZXY9InRhcDAiIG1hbmFnZWQ9Im5vIj48L3RhcmdldD4KCQkJPG1vZGVsIHR5cGU9InZpcnRpby1ub24tdHJhbnNpdGlvbmFsIj48L21vZGVsPgoJCQk8bWFjIGFkZHJl... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output to contain substring <string>: libvirt version: tests/vmi_lifecycle_test.go:186
compute pull-kubevirt-e2e-k8s-1.33-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16786/pull-kubevirt-e2e-k8s-1.33-sig-compute/2026573905086386176
Test Name Failure Message
[rfe_id:899][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]Config With a DownwardAPI defined [test_id:790]Should be the namespace and token the same for a pod and vmi tests/config_test.go:652 Unexpected Warning event received: testvmi-r7cpq,a04f53b4-a183-4baf-acf1-832b4b788cc1: server error. command SyncVMI failed: "creating DownwardAPI disks failed: exit status 5" Expected <string>: Warning not to equal <string>: Warning tests/watcher/watcher.go:195
compute pull-kubevirt-e2e-k8s-1.33-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16659/pull-kubevirt-e2e-k8s-1.33-sig-compute/2021158396387921920
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown tests/vmi_lifecycle_test.go:1623 Timed out after 15.000s. expected object to be gone, but it still exists: *v1.VirtualMachineInstance metadata: <v1.ObjectMeta>: { Name: testvmi-gskfm, GenerateName: , Namespace: kubevirt-test-default4, SelfLink: , UID: e4be22a1-45de-4059-929f-8be7c09bf6c7, ResourceVersion: 44794, Generation: 11, CreationTimestamp: { Time: 2026-02-10T10:56:42Z, }, DeletionTimestamp: { Time: 2026-02-10T10:56:57Z, }, DeletionGracePeriodSeconds: 0, Labels: { "kubevirt.io/nodeName": "node02", }, Annotations: { "kubevirt.io/created-by-test": "[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown", "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "kubevirt.io/foregroundDeleteVirtualMachine", ], ManagedFields: nil, } status: <v1.VirtualMachineInstanceStatus>: { NodeName: node02, Reason: , Conditions: [ { Type: "Ready", Status: "False", LastProbeTime: { Time: 2026-02-10T10:56:57Z, }, LastTransitionTime: { Time: 2026-02-10T10:56:57Z, }, Reason: "PodTerminating", Message: "virt-launcher pod is terminating", }, { Type: "LiveMigratable", Status: "False", LastProbeTime: { Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "InterfaceNotLiveMigratable", Message: "cannot migrate VMI which does not use masquerade or a migratable plugin to connect to the pod network", }, { Type: "StorageLiveMigratable", Status: "False", LastProbeTime: { Time: 0001-01-01T00:00:00Z, }, 
LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "NotMigratable", Message: "InterfaceNotLiveMigratable: cannot migrate VMI which does not use masquerade or a migratable plugin to connect to the pod network", }, ], Phase: Failed, PhaseTransitionTimestamps: [ { Phase: "Pending", PhaseTransitionTimestamp: { Time: 2026-02-10T10:56:42Z, }, }, { Phase: "Scheduling", PhaseTransitionTimestamp: { Time: 2026-02-10T10:56:42Z, }, }, { Phase: "Scheduled", PhaseTransitionTimestamp: { Time: 2026-02-10T10:56:52Z, }, }, { Phase: "Running", PhaseTransitionTimestamp: { Time: 2026-02-10T10:56:56Z, }, }, { Phase: "Failed", PhaseTransitionTimestamp: { Time: 2026-02-10T10:57:03Z, }, }, ], Interfaces: [ { IP: "", MAC: "ce:79:08:2e:b2:c6", Name: "default", IPs: nil, PodInterfaceName: "eth0", InterfaceName: "", InfoSource: "domain", QueueCount: 1, LinkState: "up", }, ], GuestOSInfo: {Name: "", KernelRelease: "", Version: "", PrettyName: "", VersionID: "", KernelVersion: "", Machine: "", ID: ""}, MigrationState: nil, MigrationMethod: BlockMigration, MigrationTransport: Unix, QOSClass: Burstable, LauncherContainerImageVersion: registry:5000/kubevirt/virt-launcher:devel, EvacuationNodeName: , ActivePods: { "3a91d229-3f29-4e0c-88da-b084f3e9be74": "node02", }, VolumeStatus: [ { Name: "disk0", Target: "vda", Phase: "", Reason: "", Message: "", PersistentVolumeClaimInfo: nil, HotplugVolume: nil, Size: 0, MemoryDumpVolume: nil, ContainerDiskVolume: {Checksum: 538764798}, }, ], KernelBootStatus: nil, FSFreezeStatus: , TopologyHints: nil, VirtualMachineRevisionName: , RuntimeUser: 107, VSOCKCID: nil, SelinuxContext: system_u:object_r:container_file_t:s0:c12,c51, Machine: { Type: "pc-q35-rhel9.8.0", }, CurrentCPUTopology: {Cores: 1, Sockets: 1, Threads: 1}, Memory: { GuestAtBoot: { i: {value: 134217728, scale: 0}, d: {Dec: nil}, s: "", Format: "BinarySI", }, GuestCurrent: { i: {value: 134217728, scale: 0}, d: {Dec: nil}, s: "", Format: "BinarySI", }, GuestRequested: { i: {value: 134217728, 
scale: 0}, d: {Dec... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output tests/vmi_lifecycle_test.go:1649
compute pull-kubevirt-e2e-k8s-1.35-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16865/pull-kubevirt-e2e-k8s-1.35-sig-compute/2026630210253754368
Test Name Failure Message
[sig-compute]VM state with persistent TPM VM option enabled should persist VM state of TPM across migration and restart tests/vm_state_test.go:199 Timed out after 240.000s. Expected <[]interface {} | len:3, cap:3>: [ <map[string]interface {} | len:4>{ "lastProbeTime": nil, "lastTransitionTime": <string>"2026-02-25T13:47:58Z", "type": <string>"Ready", "status": <string>"True", }, <map[string]interface {} | len:4>{ "type": <string>"LiveMigratable", "status": <string>"True", "lastProbeTime": nil, "lastTransitionTime": nil, }, <map[string]interface {} | len:4>{ "lastProbeTime": nil, "lastTransitionTime": nil, "type": <string>"StorageLiveMigratable", "status": <string>"True", }, ] expected condition of type 'AgentConnected' was not found tests/vm_state_test.go:61
compute pull-kubevirt-e2e-k8s-1.35-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16528/pull-kubevirt-e2e-k8s-1.35-sig-compute/2025677326737477632
Test Name Failure Message
[rfe_id:899][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]Config With a DownwardAPI defined [test_id:790]Should be the namespace and token the same for a pod and vmi tests/config_test.go:652 Unexpected Warning event received: testvmi-qx98s,5af1d16d-1cfc-4bfe-891f-c44fed646715: server error. command SyncVMI failed: "creating DownwardAPI disks failed: exit status 32" Expected <string>: Warning not to equal <string>: Warning tests/watcher/watcher.go:195
compute pull-kubevirt-e2e-k8s-1.34-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16684/pull-kubevirt-e2e-k8s-1.34-sig-compute/2016509309629763584
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle Softreboot a VirtualMachineInstance soft reboot vmi should fail to soft reboot a paused vmi tests/vmi_lifecycle_test.go:1438 Expected success, but got an error: <expect.TimeoutError>: expect: timer expired after 120 seconds 120000000000 tests/vmi_lifecycle_test.go:1759
compute pull-kubevirt-e2e-k8s-1.34-sig-compute
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15958/pull-kubevirt-e2e-k8s-1.34-sig-compute/2026751481796890624
Test Name Failure Message
[rfe_id:588][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]ContainerDisk [rfe_id:273][crit:medium][vendor:cnv-qe@redhat.com][level:component]Starting with virtio-win with virtio-win as secondary disk [test_id:1467]should boot and have the virtio as sata CDROM tests/container_disk_test.go:145 expected alpine to login properly Expected success, but got an error: <expect.TimeoutError>: expect: timer expired after 180 seconds 180000000000 tests/container_disk_test.go:152
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16796/pull-kubevirt-e2e-k8s-1.34-sig-compute-serial/2023468054385528832
Test Name Failure Message
[sig-compute]Configurations [rfe_id:897][crit:medium][vendor:cnv-qe@redhat.com][level:component]VirtualMachineInstance with CPU pinning cpu pinning with fedora images, dedicated and non dedicated cpu should be possible on same node via spec.domain.cpu.cores [test_id:829]should start a vm with no cpu pinning after a vm with cpu pinning on same node tests/vmi_configuration_test.go:2127 Expected success, but got an error: <expect.TimeoutError>: expect: timer expired after 120 seconds 120000000000 tests/vmi_configuration_test.go:2148
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-serial
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16604/pull-kubevirt-e2e-k8s-1.34-sig-compute-serial/2021999922412261376
Test Name Failure Message
[sig-compute]HookSidecars [rfe_id:2667][crit:medium][vendor:cnv-qe@redhat.com][level:component] VMI definition set sidecar resources [test_id:3155]should successfully start with hook sidecar annotation tests/vmi_hook_sidecar_test.go:96 Timed out after 300.324s. One of the Kubevirt control-plane components is not ready. The function passed to Eventually failed at tests/testsuite/fixture.go:192 with: Unexpected error: <*rest.wrapPreviousError | 0xc007408040>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt": dial tcp 127.0.0.1:43491: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:52234->127.0.0.1:43491: read: connection reset by peer { currentErr: <*url.Error | 0xc0096ebce0>{ Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/namespaces/kubevirt/kubevirts/kubevirt", Err: <*net.OpError | 0xc0056301e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002786de0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc007408000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc0061eed70>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc002786690>{IP: [127, 0, 0, 1], Port: 52234, Zone: ""}, Addr: <*net.TCPAddr | 0xc0027866c0>{IP: [127, 0, 0, 1], Port: 43491, Zone: ""}, Err: <*os.SyscallError | 0xc0038687a0>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred At one point, however, the function did return successfully. 
Yet, Eventually failed because the matcher was not satisfied: Expected <*v1.KubeVirt | 0xc0038cd408>: { TypeMeta: { Kind: "KubeVirt", APIVersion: "kubevirt.io/v1", }, ObjectMeta: { Name: "kubevirt", GenerateName: "", Namespace: "kubevirt", SelfLink: "", UID: "8076485f-99d9-400e-981a-ae5eca7fc405", ResourceVersion: "62167", Generation: 118, CreationTimestamp: { Time: 2026-02-12T18:15:24Z, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: { "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "foregroundDeleteKubeVirt", ], ManagedFields: [ { Manager: "kubectl-create", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-12T18:15:24Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:spec\":{\".\":{},\"f:certificateRotateStrategy\":{},\"f:configuration\":{},\"f:customizeComponents\":{},\"f:imagePullPolicy\":{},\"f:workloadUpdateStrategy\":{}}}", }, Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-12T18:16:08Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:kubevirt.io/latest-observed-api-version\":{},\"f:kubevirt.io/storage-observed-api-version\":{}},\"f:finalizers\":{\".\":{},\"v:\\\"foregroundDeleteKubeVirt\\\"\":{}}}}", }, Subresource: "", }, { Manager: "virt-controller", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-12T18:17:03Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:outdatedVirtualMachineInstanceWorkloads\":{}}}", }, Subresource: "status", }, { Manager: "tests.test", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-12T19:48:18Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: 
"{\"f:spec\":{\"f:configuration\":{\"f:changedBlockTrackingLabelSelectors\":{\".\":{},\"f:namespaceLabelSelector\":{},\"f:virtualMachineLabelSelector\":{}},\"f:developerConfiguration\":{\".\":{},\"f:featureGates\":{}},\"f:imagePullPolicy\":{},\"f:seccompConfiguration\":{\".\":{},\"f:virtualMachineInstanceProfile\":{\".\":{},\"f:customProfile\":{\".\":{},\"f:localhostProfile\":{}}}}}}}", }, Subresource: "", }, { Manager: "virt-operator", Operation: "Update", APIVersion: "kubevirt.io/v1", Time: { Time: 2026-02-12T19:48:32Z, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\".\":{},\"f:conditions\":{},\"f:defaultArchitecture\":{},\"f:generations\":{},\"f:observedDeploymentConfig\":{},\"f:observedDeploymentID\":{},\"f:observedGeneration\":{},\"f:observedKubeVirtRegistry\":{},\"f:observedKubeVirtVersion\":{},\"f:operatorVersion\":{},\"f:phase\":{},\"f:synchronizationAddresses\":{},\"f:targetDeploymentConfig\":{},\"f:targetDeploymentID\":{},\"f:targetKubeVirtRegistry\":{},\"f:targetKubeVirtVersion\":{}}}", }, Subresource: "status", }, ], }, Spec: { ImageTag: "", ImageRegistry: "", ImagePullPolicy: "IfNotPresent", ImagePullSecrets: nil, MonitorNamespace: "", ServiceMonitorNamespace: "", MonitorAccount: "", WorkloadUpdateStrategy: { WorkloadUpdateMethods: nil, BatchEvictionSize: nil, BatchEvictionInterval: nil, }, UninstallStrategy: "", CertificateRotationStrategy: {SelfSigned: nil}, ProductVersion: "", ProductName: "", ProductComponent: "", SynchronizationPort: "", Configuration: { CPUModel: "", CPURequest: nil, DeveloperConfiguration: { FeatureGates: [ "NodeRestriction", "CPUManager", "ExperimentalIgnitionSupport", "Sidecar", "Snapshot", "IncrementalBackup", "HostDisk", "EnableVirtioFsStorageVolumes", "DownwardMetrics", "WorkloadEncryptionSEV", "VMExport", "KubevirtSeccompProfile", "ObjectGraph", "DeclarativeHotplugVolumes", "NodeRestriction", "DecentralizedLiveMigration", "PanicDevices", "VideoConfig", "UtilityVolumes", "MigrationPriorityQueue", 
"RebootPolicy", ], DisabledFeatureGates: nil, LessPVCSpaceToleration: 0, MinimumReservePVCBytes: 0, MemoryOvercommit: 0, NodeSelectors: nil, UseEmulation: false, CPUAllocationRatio: 0, MinimumClusterTSCFrequency: nil, DiskVerification: nil, LogVerbosity: nil, ClusterProfiler: false, }, EmulatedMachines: nil, ImagePullPolicy: "IfNotPresent", MigrationConfiguration: nil, MachineType: "", NetworkConfiguration: nil, OVMFPath: "", SELinuxLauncherType: "", DefaultRuntimeClass: "", SMBIOSConfig: nil, ArchitectureConfiguration: nil, EvictionStrategy: nil, AdditionalGuestMemoryOverheadRatio: nil, SupportContainerResources: nil, SupportedGuestAgentVersions: nil, MemBalloonStatsPeriod: nil, PermittedHostDevices: nil, MediatedDevicesConfiguration: nil, DeprecatedMinCPUModel: "", ObsoleteCPUModels: nil, VirtualMachineInstancesPerNode: nil, APIConfiguration: nil, WebhookConfiguration: nil, ControllerConfiguration: nil, HandlerConfiguration: nil, TLSConfiguration: nil, SeccompConfiguration: { VirtualMachineInstanceProfile: { CustomProfile: { LocalhostProfile: "kubevirt/kubevirt.json", RuntimeDefaultProfile: false, }, }, }, VMStateStorageClass: "", VirtualMachineOptions: nil, KSMConfiguration: nil, AutoCPULimitNamespaceLabelSelector: nil, LiveUpdateConfiguration: nil, VMRolloutStrategy: nil, CommonInstancetypesDeployment: nil, Instancetype: nil, Hypervisors: nil, ChangedBlockTrackingLabelSelectors: { NamespaceLabelSelector: { MatchLabels: { "changedBlockTracking": "true", }, MatchExpressions: nil, }, VirtualMachineLabelSelector: { MatchLabels: { "changedBlockTracking": "true", }, MatchExpressions: nil, }, }, }, Infra: nil, Workloads: nil, CustomizeComponents: {Patches: nil, Flags: nil}, }, Status: { Phase: "Deployed", Conditions: [ { Type: "Available", Status: "True", LastProbeTime: { Time: 2026-02-12T19:48:27Z, }, LastTransitionTime: { Time: 2026-02-12T19:48:27Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Progressing", Status: "False", 
LastProbeTime: { Time: 2026-02-12T19:48:27Z, }, LastTransitionTime: { Time: 2026-02-12T19:48:27Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Degraded", Status: "False", LastProbeTime: { Time: 2026-02-12T19:48:27Z, }, LastTransitionTime: { Time: 2026-02-12T19:48:27Z, }, Reason: "AllComponentsReady", Message: "All components are ready.", }, { Type: "Created", Status: "True", LastProbeTime: { Time: 2026-02-12T18:16:58Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "AllResourcesCreated", Message: "All resources were created.", }, ], OperatorVersion: "v1.8.0-beta.0.158+2ed15d86deda0c", TargetKubeVirtRegistry: "registry:5000/kubevirt", TargetKubeVirtVersion: "devel", TargetDeploymentConfig: "{\"id\":\"1f6b1b301a6a4aaffe43bcdf381bc788c3d23b99\",\"namespace\":\"kubevirt\",\"registry\":\"registry:5000/kubevirt\",\"kubeVirtVersion\":\"devel\",\"virtOperatorImage\":\"registry:5000/kubevirt/virt-operator:devel\",\"additionalProperties\":{\"CertificateRotationStrategy\":\"\\u003cv1.KubeVirtCertificateRotateStrategy Value\\u003e\",\"Configuration\":\"\\u003cv1.KubeVirtConfiguration Value\\u003e\",\"CustomizeComponents\":\"\\u003cv1.CustomizeComponents Value\\u003e\",\"ImagePullPolicy\":\"IfNotPresent\",\"ImagePullSecrets\":\"null\",\"Infra\":\"\\u003c*v1.ComponentConfig Value\\u003e\",\"MonitorAccount\":\"\",\"MonitorNamespace\":\"\",\"ProductComponent\":\"\",\"ProductName\":\"\",\"ProductVersion\":\"\",\"ServiceMonitorNamespace\":\"\",\"SynchronizationPort\":\"\",\"UninstallStrategy\":\"\",\"WorkloadUpdateStrategy\":\"\\u003cv1.KubeVirtWorkloadUpdateStrategy Value\\u003e\",\"Workloads\":\"\\u003c*v1.ComponentConfig Value\\u003e\"}}", TargetDeploymentID: "1f6b1b301a6a4aaffe43bcdf381bc788c3d23b99", ObservedKubeVirtRegistry: "registry:5000/kubevirt", ObservedKubeVirtVersion: "devel", ObservedDeploymentConfig: 
"{\"id\":\"1f6b1b301a6a4aaffe43bcdf381bc788c3d23b99\",\"namespace\":\"kubevirt\",\"registry\":\"registry:5000/kubevirt\",\"kubeVirtVersion\":\"devel\",\"virtOperatorImage\":\"registry:5000/kubevirt/virt-operator:devel\",\"additionalProperties\":{\"CertificateRotationStrategy\":\"\\u003cv1.KubeVirtCertificateRotateStrategy Value\\u003e\",\"Configuration\":\"\\u003cv1.KubeVirtConfiguration Value\\u003e\",\"CustomizeComponents\":\"\\u003cv1.CustomizeComponents Value\\u003e\",\"ImagePullPolicy\":\"IfNotPresent\",\"ImagePullSecrets\":\"null\",\"Infra\":\"\\u003c*v1.ComponentConfig Value\\u003e\",\"MonitorAccount\":\"\",\"MonitorNamespace\":\"\",\"ProductComponent\":\"\",\"ProductName\":\"\",\"ProductVersion\":\"\",\"ServiceMonitorNamespace\":\"\",\"SynchronizationPort\":\"\",\"UninstallStrategy\":\"\",\"WorkloadUpdateStrategy\":\"\\u003cv1.KubeVirtWorkloadUpdateStrategy Value\\u003e\",\"Workloads\":\"\\u003c*v1.ComponentConfig Value\\u003e\"}}", ObservedDeploymentID: "1f6b1b301a6a4aaffe43bcdf381bc788c3d23b99", OutdatedVirtualMachineInstanceWorkloads: 0, ObservedGeneration: 117, DefaultArchitecture: "amd64", Generations: [ { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstances.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancepresets.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancereplicasets.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachines.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancemigrations.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: 
"apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinesnapshots.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinesnapshotcontents.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinerestores.snapshot.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineinstancetypes.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclusterinstancetypes.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinepools.pool.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "migrationpolicies.migrations.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinepreferences.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclusterpreferences.instancetype.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineexports.export.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachineclones.clone.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: 
"customresourcedefinitions", Namespace: "", Name: "virtualmachinebackups.backup.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "apiextensions.k8s.io/v1", Resource: "customresourcedefinitions", Namespace: "", Name: "virtualmachinebackuptrackers.backup.kubevirt.io", LastGeneration: 1, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "validatingwebhookconfigurations", Namespace: "", Name: "virt-operator-validator", LastGeneration: 188, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "validatingwebhookconfigurations", Namespace: "", Name: "virt-api-validator", LastGeneration: 188, Hash: "", }, { Group: "admissionregistration.k8s.io", Resource: "mutatingwebhookconfigurations", Namespace: "", Name: "virt-api-mutator", LastGeneration: 187, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-api", LastGeneration: 118, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-api-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-controller", LastGeneration: 116, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-controller-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "daemonsets", Namespace: "kubevirt", Name: "virt-handler", LastGeneration: 3, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-exportproxy", LastGeneration: 4, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-exportproxy-pdb", LastGeneration: 1, Hash: "", }, { Group: "apps", Resource: "deployments", Namespace: "kubevirt", Name: "virt-synchronization-controller", LastGeneration: 4, Hash: "", }, { Group: "apps", Resource: "poddisruptionbudgets", Namespace: "kubevirt", Name: "virt-synchronization-controller-pdb", LastGeneration: 1, Hash: "", }, ], SynchronizationAddresses: 
["10.244.0.64:9185", "fd10:244::40:9185"], }, } to satisfy predicate <func(*v1.KubeVirt) bool>: 0x1ff4ca0 tests/testsuite/fixture.go:194
[sig-compute]HookSidecars [rfe_id:2667][crit:medium][vendor:cnv-qe@redhat.com][level:component] VMI definition with sidecar feature gate disabled [test_id:2666]should not start with hook sidecar annotation tests/vmi_hook_sidecar_test.go:287 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0044c5d40>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00825adc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00a9024b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0085aeea0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-network] [crit:high][vendor:cnv-qe@redhat.com][level:component] [crit:high][vendor:cnv-qe@redhat.com][level:component]Creating a VirtualMachineInstance when virt-handler is responsive VMIs shouldn't fail after the kubelet restarts [sig-compute]with default networking tests/network/vmi_lifecycle.go:109 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0090af3e0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0031eb680>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005a78360>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc007409ac0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2830]last eviction should fail for multi-replica virt-controller pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc001c8ef60>: Get "https://127.0.0.1:43491/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc008e36af0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00296e240>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc004b7e9c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:298
[ref_id:2717][sig-compute]KubeVirt control plane resilience pod eviction evicting pods of control plane [test_id:2799]last eviction should fail for multi-replica virt-api pods tests/virt_control_plane_test.go:135 Should list compute nodeList Unexpected error: <*url.Error | 0xc0030c9800>: Get "https://127.0.0.1:43491/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/api/v1/nodes?labelSelector=kubevirt.io%2Fschedulable%3Dtrue", Err: <*net.OpError | 0xc0061ee550>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0044c49f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0043e0c20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libnode/node.go:298
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when control plane pods are running [test_id:2806]virt-controller and virt-api pods have a pod disruption budget tests/virt_control_plane_test.go:180 Unexpected error: <*url.Error | 0xc0042605a0>: Get "https://127.0.0.1:43491/apis/apps/v1/namespaces/kubevirt/deployments": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/apps/v1/namespaces/kubevirt/deployments", Err: <*net.OpError | 0xc00856ed70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002350b40>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00755cc20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:184
[ref_id:2717][sig-compute]KubeVirt control plane resilience control plane components check when Control plane pods temporarily lose connection to Kubernetes API should fail health checks when connectivity is lost, and recover when connectivity is regained tests/virt_control_plane_test.go:240 Unexpected error: <*url.Error | 0xc002b46780>: Get "https://127.0.0.1:43491/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/apps/v1/namespaces/kubevirt/daemonsets/virt-handler", Err: <*net.OpError | 0xc0086edd10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008078e10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc008a9ae60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/virt_control_plane_test.go:241
[sig-compute]virt-handler multiple HTTP calls should re-use connections and not grow the number of open connections tests/virt-handler_test.go:48 Test Panicked tests/libnet/cloudinit/cloudinit.go:192 Panic: failed defining network data when running options: failed defining network data ethernet device when running options: failed defining network data nameservers when retrieving cluster DNS service IP: unable to detect the DNS services: Get "https://127.0.0.1:43491/api/v1/namespaces/kube-system/services/kube-dns": dial tcp 127.0.0.1:43491: connect: connection refused, Get "https://127.0.0.1:43491/api/v1/namespaces/openshift-dns/services/dns-default": dial tcp 127.0.0.1:43491: connect: connection refused Full stack: kubevirt.io/kubevirt/tests/libnet/cloudinit.CreateDefaultCloudInitNetworkData() tests/libnet/cloudinit/cloudinit.go:192 +0x154 kubevirt.io/kubevirt/tests/libnet.WithMasqueradeNetworking(...) tests/libnet/vmibuilder.go:32 tests/go_default_test_test.init.func23.1() tests/virt-handler_test.go:95 +0x3a
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to reference tests/instancetype/reference_policy.go:96 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0036c81b0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0085ad810>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006f4fbc0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00967ce80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expand tests/instancetype/reference_policy.go:97 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc001906b70>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0025d81e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00588e6c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc004fc2380>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute] InstancetypeReferencePolicy should result in running VirtualMachine when set to expandAll tests/instancetype/reference_policy.go:98 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0058502d0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008a27450>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004261740>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc009a13600>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure tls configuration [test_id:9306]should result only connections with the correct client-side tls configurations are accepted by the components tests/infrastructure/tls-configuration.go:56 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc004b85b30>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007c63900>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000bd8630>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc007de8220>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4136] should find one leading virt-controller and two ready tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009910780>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00814d770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0021a7aa0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00858aae0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4137]should find one leading virt-operator and two ready tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc008079da0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008efe8c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0052e8090>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00967dd20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4138]should be exposed and registered on the metrics endpoint tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004598d50>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00254abe0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc006f8bbc0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc003f75f40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4139]should return Prometheus metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc000a67050>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc002e8c2d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0024a6720>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc006a185a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should throttle the Prometheus metrics access [test_id:4140] by using IPv4 tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc005f9be00>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc002db8b90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0041d34a0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0067b50e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should throttle the Prometheus metrics access [test_id:6226] by using IPv6 tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004087aa0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc005630dc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc009907aa0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc003299cc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4141]should include the metrics for a running VM tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0042608d0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc000365c70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005de18c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00354a2e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should expose kubevirt_node_deprecated_machine_types metric tests/infrastructure/prometheus.go:213 Timed out after 10.960s. Unexpected error: <*rest.wrapPreviousError | 0xc00354b2c0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:52686->127.0.0.1:43491: read: connection reset by peer { currentErr: <*url.Error | 0xc004261830>{ Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00825a6e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002a7b6e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00354b240>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*net.OpError | 0xc0004dbcc0>{ Op: "read", Net: "tcp", Source: <*net.TCPAddr | 0xc002a7b0b0>{IP: [127, 0, 0, 1], Port: 52686, Zone: ""}, Addr: <*net.TCPAddr | 0xc002a7b110>{IP: [127, 0, 0, 1], Port: 43491, Zone: ""}, Err: <*os.SyscallError | 0xc009a12aa0>{ Syscall: "read", Err: <syscall.Errno>0x68, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage flush requests metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008ea4ff0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008ad00f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002583e90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc009a13b40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] time spent on cache flushing metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004b851a0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc00587e690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005a78570>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc008a9a700>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] I/O read operations metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc005f9ade0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc005798d20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00588f9b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00777e9e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] I/O write operations metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0041d2570>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0001b9590>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002787530>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0054551a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage read operation time metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc009cdd9e0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0004da6e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005851890>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc003299dc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage read traffic in bytes metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0099065a0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0072f2910>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002b47740>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00a10eae0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage write operation time metric tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0098fee70>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0094e2000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004506f90>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc009a134a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include the storage metrics for a running VM [test_id:4142] storage write traffic in bytes metric tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00277b020>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008efed70>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0053873b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0044f9720>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4143] network metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc005f9a900>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008e37770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001ee3bf0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc002965480>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4144] memory metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc004086d20>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008a27540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0041d33b0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0067b5540>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4553] vcpu wait tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc009cddb60>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007daedc0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005de0570>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0074084c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4554] vcpu seconds tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc009906330>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc0031eab90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00a903f50>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc0047eb4a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints should include metrics for a running VM [test_id:4556] vmi unused memory tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc009b43350>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc008ad0f50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0098fe690>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc003299980>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4146]should include VMI phase metrics for all running VMs tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc001ee3bf0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc006152690>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc008ea4f00>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc007118760>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints VMI eviction blocker status should include VMI eviction blocker status for all running VMs [test_id:4148] by IPv4 tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc00820c7b0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009e97450>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0058d98f0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc008af5540>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints VMI eviction blocker status should include VMI eviction blocker status for all running VMs [test_id:6243] by IPv6 tests/infrastructure/prometheus.go:213 Timed out after 10.002s. Unexpected error: <*url.Error | 0xc0058d9950>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": net/http: TLS handshake timeout { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <http.tlsHandshakeTimeoutError>{}, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4147]should include kubernetes labels to VMI metrics tests/infrastructure/prometheus.go:213 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0041d3770>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc005799770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005528510>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc009891340>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute] Infrastructure [rfe_id:3187][crit:medium][vendor:cnv-qe@redhat.com][level:component]Prometheus Endpoints [test_id:4555]should include swap metrics tests/infrastructure/prometheus.go:213 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc0096ea960>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc005631770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002534330>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc006a19240>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[rfe_id:588][crit:medium][vendor:cnv-qe@redhat.com][level:component][sig-compute]ContainerDisk [rfe_id:273][crit:medium][vendor:cnv-qe@redhat.com][level:component]Starting a VirtualMachineInstance should obey the disk verification limits in the KubeVirt CR [test_id:7182]disk verification should fail when the memory limit is too low tests/container_disk_test.go:102 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc0057f44e0>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007c634f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00a9027e0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc00755d620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
[sig-compute]VM Rollout Strategy When using the Stage rollout strategy [test_id:11207]should set RestartRequired when changing any spec field tests/hotplug/rolloutstrategy.go:38 Timed out after 10.000s. Unexpected error: <*url.Error | 0xc008ea4990>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc007c632c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0057f5a10>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc008f1f340>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
AfterSuite tests/tests_suite_test.go:107 Timed out after 10.001s. Unexpected error: <*url.Error | 0xc003573260>: Get "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts": dial tcp 127.0.0.1:43491: connect: connection refused { Op: "Get", URL: "https://127.0.0.1:43491/apis/kubevirt.io/v1/kubevirts", Err: <*net.OpError | 0xc009e96460>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000e80e70>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 127, 0, 0, 1], Port: 43491, Zone: "", }, Err: <*os.SyscallError | 0xc008af4800>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } occurred tests/libkubevirt/kubevirt.go:49
compute pull-kubevirt-e2e-kind-1.35-vgpu
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16880/pull-kubevirt-e2e-kind-1.35-vgpu/2025894847717576704
Test Name Failure Message
[sig-compute]MediatedDevices with mediated devices configuration Should successfully passthrough a mediated device tests/mdev_configuration_allocation_test.go:249 Timed out after 120.003s. wait for the kubelet to stop promoting unconfigured devices Expected <int64>: 16 to be zero-valued tests/mdev_configuration_allocation_test.go:250
compute pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64-1.7
https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/16793/pull-kubevirt-e2e-k8s-1.34-sig-compute-arm64-1.7/2021287731635687424
Test Name Failure Message
[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown tests/vmi_lifecycle_test.go:1608 Timed out after 15.001s. expected object to be gone, but it still exists: *v1.VirtualMachineInstance metadata: <v1.ObjectMeta>: { Name: testvmi-sthw2, GenerateName: , Namespace: kubevirt-test-default1, SelfLink: , UID: c0379eb9-111f-41a7-b704-bf51d9e9d2ee, ResourceVersion: 13607, Generation: 10, CreationTimestamp: { Time: 2026-02-10T19:03:12Z, }, DeletionTimestamp: { Time: 2026-02-10T19:03:33Z, }, DeletionGracePeriodSeconds: 0, Labels: { "kubevirt.io/nodeName": "kind-1.34-worker", }, Annotations: { "kubevirt.io/created-by-test": "[rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component][sig-compute]VMIlifecycle [rfe_id:273][crit:high][vendor:cnv-qe@redhat.com][level:component]Delete a VirtualMachineInstance with grace period greater than 0 [test_id:1655]should run graceful shutdown", "kubevirt.io/latest-observed-api-version": "v1", "kubevirt.io/storage-observed-api-version": "v1", }, OwnerReferences: nil, Finalizers: [ "kubevirt.io/foregroundDeleteVirtualMachine", ], ManagedFields: nil, } status: <v1.VirtualMachineInstanceStatus>: { NodeName: kind-1.34-worker, Reason: , Conditions: [ { Type: "Ready", Status: "False", LastProbeTime: { Time: 2026-02-10T19:03:33Z, }, LastTransitionTime: { Time: 2026-02-10T19:03:33Z, }, Reason: "PodTerminating", Message: "virt-launcher pod is terminating", }, { Type: "LiveMigratable", Status: "False", LastProbeTime: { Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "InterfaceNotLiveMigratable", Message: "cannot migrate VMI which does not use masquerade, bridge with kubevirt.io/allow-pod-bridge-network-live-migration VM annotation or a migratable plugin to connect to the pod network", }, { 
Type: "StorageLiveMigratable", Status: "False", LastProbeTime: { Time: 0001-01-01T00:00:00Z, }, LastTransitionTime: { Time: 0001-01-01T00:00:00Z, }, Reason: "NotMigratable", Message: "InterfaceNotLiveMigratable: cannot migrate VMI which does not use masquerade, bridge with kubevirt.io/allow-pod-bridge-network-live-migration VM annotation or a migratable plugin to connect to the pod network", }, ], Phase: Failed, PhaseTransitionTimestamps: [ { Phase: "Pending", PhaseTransitionTimestamp: { Time: 2026-02-10T19:03:12Z, }, }, { Phase: "Scheduling", PhaseTransitionTimestamp: { Time: 2026-02-10T19:03:12Z, }, }, { Phase: "Scheduled", PhaseTransitionTimestamp: { Time: 2026-02-10T19:03:32Z, }, }, { Phase: "Running", PhaseTransitionTimestamp: { Time: 2026-02-10T19:03:33Z, }, }, { Phase: "Failed", PhaseTransitionTimestamp: { Time: 2026-02-10T19:03:38Z, }, }, ], Interfaces: [ { IP: "", MAC: "1e:21:a8:36:79:f7", Name: "default", IPs: nil, PodInterfaceName: "eth0", InterfaceName: "", InfoSource: "domain", QueueCount: 1, LinkState: "up", }, ], GuestOSInfo: {Name: "", KernelRelease: "", Version: "", PrettyName: "", VersionID: "", KernelVersion: "", Machine: "", ID: ""}, MigrationState: nil, MigrationMethod: BlockMigration, MigrationTransport: Unix, QOSClass: Burstable, LauncherContainerImageVersion: registry:5000/kubevirt/virt-launcher:devel, EvacuationNodeName: , ActivePods: { "04b8b695-b6ac-4563-85fe-bce3ff334e03": "kind-1.34-worker", }, VolumeStatus: [ { Name: "disk0", Target: "vda", Phase: "", Reason: "", Message: "", PersistentVolumeClaimInfo: nil, HotplugVolume: nil, Size: 0, MemoryDumpVolume: nil, ContainerDiskVolume: {Checksum: 1068092945}, }, ], KernelBootStatus: nil, FSFreezeStatus: , TopologyHints: nil, VirtualMachineRevisionName: , RuntimeUser: 107, VSOCKCID: nil, SelinuxContext: none, Machine: { Type: "virt-rhel9.6.0", }, CurrentCPUTopology: {Cores: 1, Sockets: 1, Threads: 1}, Memory: { GuestAtBoot: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "", Format: 
"BinarySI", }, GuestCurrent: { i: {value: 268435456, scale: 0}, d: {Dec: nil}, s: "... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output tests/vmi_lifecycle_test.go:1634