SIG Failure Report

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.33-sig-storage-1.6
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15773/pull-kubevirt-e2e-k8s-1.33-sig-storage-1.6/1972559888345206784
Test name: [sig-storage] Volumes update with migration Update volumes with the migration updateVolumesStrategy should cancel the migration and clear the volume migration state
Failure message: tests/storage/migration.go:758 Timed out after 120.001s. The volumes migrated should be set Expected <[]v1.StorageMigratedVolumeInfo | len:0, cap:0>: nil to contain element matching <v1.StorageMigratedVolumeInfo>: { VolumeName: "volume", SourcePVCInfo: { ClaimName: "test-datavolume-wpn9fngt2twd", AccessModes: nil, VolumeMode: "Filesystem", Capacity: nil, Requests: nil, Preallocated: false, FilesystemOverhead: nil, }, DestinationPVCInfo: { ClaimName: "dest-mrghv", AccessModes: nil, VolumeMode: "Filesystem", Capacity: nil, Requests: nil, Preallocated: false, FilesystemOverhead: nil, }, } tests/storage/migration.go:774
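
For reference, this first failure has the standard shape of a Gomega async assertion whose Eventually never observed the expected element before the timeout ("Timed out after 120.001s ... to contain element matching ..."). Below is a minimal standalone sketch of that pattern, not the actual test code: volumeInfo and fetch are stand-ins for v1.StorageMigratedVolumeInfo and the VMI status lookup the real test performs.

```go
package main

import (
	"fmt"
	"time"

	. "github.com/onsi/gomega"
)

// volumeInfo is a stand-in for v1.StorageMigratedVolumeInfo; only the
// field the matcher below cares about is modelled.
type volumeInfo struct {
	VolumeName string
}

func main() {
	// Outside Ginkgo we supply our own fail handler; in the real suite
	// Ginkgo fails the spec with the same kind of message.
	g := NewGomega(func(message string, _ ...int) {
		fmt.Println(message)
	})

	// fetch stands in for reading the VMI's migrated-volumes status from
	// the cluster; always returning nil reproduces the timeout above.
	fetch := func() []volumeInfo { return nil }

	g.Eventually(fetch).
		WithTimeout(2 * time.Second).
		WithPolling(200 * time.Millisecond).
		Should(ContainElement(volumeInfo{VolumeName: "volume"}),
			"The volumes migrated should be set")
}
```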

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.33-sig-storage-1.6
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15737/pull-kubevirt-e2e-k8s-1.33-sig-storage-1.6/1970931297991790592
Test name: [sig-storage] Export should mark the status phase skipped on VM without volumes
Failure message: tests/storage/export.go:2096 Timed out after 30.001s. Expected <v1beta1.VirtualMachineExportPhase>: Pending to equal <v1beta1.VirtualMachineExportPhase>: Ready tests/storage/export.go:1420

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.29-sig-storage-1.2
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15659/pull-kubevirt-e2e-k8s-1.29-sig-storage-1.2/1966221647685881856
Test name: [sig-storage] Export virtctl vmexport command Download a volume with vmexport Download succeeds with an already existing vmexport
Failure message: tests/storage/export.go:2405 Timed out after 30.001s. Expected <*url.Error | 0xc007071ef0>: Get "https://127.0.0.1:42355/volumes/test-datavolume-6qfnsfj62xpw/disk.img.gz": http: server gave HTTP response to HTTPS client { Op: "Get", URL: "https://127.0.0.1:42355/volumes/test-datavolume-6qfnsfj62xpw/disk.img.gz", Err: <*errors.errorString | 0x45275d0>{ s: "http: server gave HTTP response to HTTPS client", }, } to be nil tests/storage/export.go:2430
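
The "http: server gave HTTP response to HTTPS client" wrapped in the *url.Error above is the stock Go net/http error returned when a client dials an https:// URL and the port answers with plain HTTP instead of a TLS handshake. A minimal reproduction, assuming nothing about the vmexport endpoint itself:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
)

func main() {
	// A plain-HTTP server stands in for whatever answered on the
	// forwarded port without TLS in the failing test.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "disk.img.gz")
	}))
	defer srv.Close()

	// Dial the same port with an https:// URL: the client expects a TLS
	// handshake and instead reads an HTTP response.
	httpsURL := strings.Replace(srv.URL, "http://", "https://", 1)
	_, err := http.Get(httpsURL + "/volumes/test-datavolume/disk.img.gz")

	var uerr *url.Error
	if errors.As(err, &uerr) {
		// Prints: Get "https://127.0.0.1:<port>/...": http: server gave
		// HTTP response to HTTPS client
		fmt.Println(uerr)
	}
}
```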

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.32-sig-storage
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15561/pull-kubevirt-e2e-k8s-1.32-sig-storage/1972972045037735936
Test name: [rfe_id:393][crit:high][vendor:cnv-qe@redhat.com][level:system][sig-compute] Live Migration across namespaces container disk should be able to cancel a migration by deleting the migration resource delete target migration
Failure message: tests/migration/namespace.go:631 Timed out after 180.000s. migration test-migration-fx6tl was expected to disappear after 180 seconds, but it did not The matcher passed to Eventually returned the following error: <*errors.errorString | 0xc0090c2810>: Expected an error, got nil { s: "Expected an error, got nil", } tests/migration/namespace.go:620

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.31-sig-storage-1.6
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15697/pull-kubevirt-e2e-k8s-1.31-sig-storage-1.6/1968646173824651264
Test name: [sig-storage] Hotplug iothreads should allow adding and removing hotplugged volumes with dedicated IO and auto policy
Failure message: tests/storage/hotplug.go:2005 Timed out after 90.000s. Expected success, but got an error: <expect.TimeoutError>: expect: timer expired after 5 seconds 5000000000 tests/storage/hotplug.go:388

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.32-sig-storage-1.6
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15737/pull-kubevirt-e2e-k8s-1.33-sig-storage-1.6/1970931293860401152
Test name: [sig-storage] Hotplug iothreads should allow adding and removing hotplugged volumes with dedicated IO and auto policy
Failure message: tests/storage/hotplug.go:2005 Timed out after 90.001s. Expected success, but got an error: <expect.TimeoutError>: expect: timer expired after 5 seconds 5000000000 tests/storage/hotplug.go:388

SIG: storage
Lane: pull-kubevirt-e2e-k8s-1.27-sig-storage-1.2
Prow link: https://prow.ci.kubevirt.io//view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/15659/pull-kubevirt-e2e-k8s-1.27-sig-storage-1.2/1966221651053907968
Test name: [sig-storage] Storage Starting a VirtualMachineInstance [rfe_id:3106][crit:medium][vendor:cnv-qe@redhat.com][level:component]with Alpine PVC should be successfully started [Serial]with NFS Disk PVC using ipv4 address of the NFS pod not owned by qemu
Failure message: tests/storage/storage.go:295 Timed out after 180.001s. Timed out waiting for VMI testvmi-56xzv to enter [Running] phase(s) Expected <v1.VirtualMachineInstancePhase>: Scheduled to be an element of <[]v1.VirtualMachineInstancePhase | len:1, cap:1>: ["Running"] tests/libwait/wait.go:76
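
This last failure is a phase wait expiring while the VMI is still Scheduled. A minimal sketch of that kind of polling loop (not tests/libwait itself); getPhase is a hypothetical stand-in for reading the VMI's status.phase from the cluster:

```go
package main

import (
	"fmt"
	"time"
)

// phase mirrors v1.VirtualMachineInstancePhase as a plain string.
type phase string

// getPhase stands in for fetching the VMI's status.phase; returning
// "Scheduled" forever reproduces the failure, where the VMI reached a
// node but never reported Running within the timeout.
func getPhase() phase { return "Scheduled" }

// waitForPhases polls until the VMI reaches one of the wanted phases or
// the timeout expires.
func waitForPhases(timeout time.Duration, wanted ...phase) error {
	deadline := time.Now().Add(timeout)
	for {
		current := getPhase()
		for _, w := range wanted {
			if current == w {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for VMI to enter %v phase(s), last phase: %s", wanted, current)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForPhases(3*time.Second, "Running"); err != nil {
		fmt.Println(err)
	}
}
```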