Class org.broadinstitute.hellbender.tools.spark.pipelines.MarkDuplicatesSparkIntegrationTest

100% successful

Tests
| Test | Duration | Result |
|---|---|---|
| assertThatExpectedOutputUpdateToggleIsDisabled | 0s | passed |
| testAssertAssumeUnsortedFilesAreQueryGroupedFiles | 0.577s | passed |
| testAssertCorrectSortOrderMultipleBams | 0.125s | passed |
| testAssertCorrectSortOrderMultipleBamsOverriding | 1.503s | passed |
| testBulkFragmentsNoDuplicates | 0.933s | passed |
| testBulkFragmentsWithDuplicates | 1.929s | passed |
| testDifferentChromosomesInOppositeOrder | 0.322s | passed |
| testDuplicateDetectionDataProviderWithMetrics[0](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/example.chr1.1-1K.unmarkedDups.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.chr1.1-1K.unmarkedDups.markDuplicate.metrics) | 0.305s | passed |
| testDuplicateDetectionDataProviderWithMetrics[1](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/optical_dupes.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.optical_dupes.markDuplicate.metrics) | 0.275s | passed |
| testDuplicateDetectionDataProviderWithMetrics[2](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/inputSingleLibrarySolexa16404.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.inputSingleLibrarySolexa16404.metrics) | 0.290s | passed |
| testFlowModeFlag | 1.615s | passed |
| testHashCollisionHandling | 1.058s | passed |
| testMDOrder[0](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.mdOrderBug.bam) | 0.323s | passed |
| testMDOrder[1](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug2.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.mdOrderBug2.bam) | 0.315s | passed |
| testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[0](true, true, true) | 0.322s | passed |
| testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[1](true, false, true) | 0.318s | passed |
| testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[2](false, true, true) | 0.442s | passed |
| testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[3](true, true, false) | 0.323s | passed |
| testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[4](true, false, false) | 0.323s | passed |
| testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[5](false, true, false) | 0.322s | passed |
| testMappedFragmentAndMappedPairFirstOfPairNonPrimary | 0.311s | passed |
| testMappedFragmentAndMatePairFirstUnmapped | 0.318s | passed |
| testMappedFragmentAndMatePairSecondUnmapped | 0.322s | passed |
| testMappedPairAndMappedFragmentAndMatePairFirstUnmapped | 0.313s | passed |
| testMappedPairAndMappedFragmentAndMatePairSecondUnmapped | 0.342s | passed |
| testMappedPairAndMatePairFirstOppositeStrandSecondUnmapped | 0.315s | passed |
| testMappedPairAndMatePairFirstUnmapped | 0.320s | passed |
| testMappedPairAndMatePairSecondUnmapped | 0.313s | passed |
| testMappedPairWithFirstEndSamePositionAndOther | 0.317s | passed |
| testMappedPairWithSamePosition | 0.339s | passed |
| testMappedPairWithSamePositionSameCigar | 0.319s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[0]([Ljava.io.File;@809bdfa, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.374s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[1]([Ljava.io.File;@26d95b1e, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.403s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[2]([Ljava.io.File;@6847097f, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.382s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[3]([Ljava.io.File;@4257b716, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.351s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[4]([Ljava.io.File;@5e2e6ce7, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.351s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[5]([Ljava.io.File;@42bbf883, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.243s | passed |
| testMarkDuplicatesSparkDeletingDuplicateReads[6]([Ljava.io.File;@746f16c8, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.230s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[0]([Ljava.io.File;@301a8a, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.377s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[1]([Ljava.io.File;@3256f258, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.384s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[2]([Ljava.io.File;@7702f1f1, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.381s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[3]([Ljava.io.File;@eeffed, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.360s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[4]([Ljava.io.File;@38a0593, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.342s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[5]([Ljava.io.File;@3a182056, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.219s | passed |
| testMarkDuplicatesSparkDeletingOpticalDuplicateReads[6]([Ljava.io.File;@71ee2982, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.223s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[0]([Ljava.io.File;@61d84f17, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.355s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[1]([Ljava.io.File;@d0305da, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.375s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[2]([Ljava.io.File;@2f026b9c, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.365s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[3]([Ljava.io.File;@7a313e1, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.341s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[4]([Ljava.io.File;@7cc82ce6, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.589s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[5]([Ljava.io.File;@2a02b7f4, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.220s | passed |
| testMarkDuplicatesSparkIntegrationTestLocal[6]([Ljava.io.File;@6870f467, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.216s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[0]([Ljava.io.File;@6f4e7b, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.357s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[1]([Ljava.io.File;@1baed31a, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.369s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[2]([Ljava.io.File;@1750e9f2, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.363s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[3]([Ljava.io.File;@3eec1442, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.330s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[4]([Ljava.io.File;@10c38e4d, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.338s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[5]([Ljava.io.File;@111d517d, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.212s | passed |
| testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[6]([Ljava.io.File;@60c8c044, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.220s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[0]([Ljava.io.File;@1c1a5d57, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.356s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[1]([Ljava.io.File;@3b9e04c7, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.364s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[2]([Ljava.io.File;@32f56d21, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.371s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[3]([Ljava.io.File;@f845be7, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.337s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[4]([Ljava.io.File;@4d3d773, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.336s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[5]([Ljava.io.File;@64d0d409, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.223s | passed |
| testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[6]([Ljava.io.File;@3c009401, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.214s | passed |
| testMatePairFirstUnmapped | 0.318s | passed |
| testMatePairSecondUnmapped | 0.327s | passed |
| testNoReadGroupInRead | 0.226s | passed |
| testNonExistantReadGroupInRead | 0.220s | passed |
| testNullOpticalDuplicates | 0.373s | passed |
| testOpticalDuplicateClusterOneEndSamePositionOneCluster | 0.308s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicatesWithinPixelDistance | 0.311s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[0](RUNID:7:1203:2886:82292, RUNID:7:1205:3886:16834) | 0.316s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[10](RUNID:7:1203:6886:16756, RUNID:7:1203:4886:26756) | 0.315s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[11](RUNID:7:1203:2886:34756, RUNID:7:1203:2886:36756) | 0.316s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[12](RUNID:7:1203:2886:35756, RUNID:7:1203:2886:36756) | 0.321s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[13](RUNID:7:1203:2886:37756, RUNID:7:1203:2886:36756) | 0.315s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[14](RUNID:7:1203:2886:38756, RUNID:7:1203:2886:36756) | 0.309s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[15](HK3T5CCXX160204:3:1112:11586:37067, HK3T5CCXX160204:3:1112:11586:32144) | 0.318s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[1](RUNID:7:1203:2886:16756, RUNID:7:1205:3886:16756) | 0.307s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[2](RUNID:7:1204:2886:16756, RUNID:7:1205:3886:16756) | 0.302s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[3](RUNID:7:1205:2886:16756, RUNID:7:1205:3886:16756) | 0.315s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[4](RUNID:7:1206:2886:16756, RUNID:7:1205:3886:16756) | 0.309s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[5](RUNID:7:1207:2886:16756, RUNID:7:1205:3886:16756) | 0.311s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[6](RUNID:7:1203:2886:16756, RUNID:7:1203:4886:26756) | 0.317s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[7](RUNID:7:1203:3886:16756, RUNID:7:1203:4886:26756) | 0.320s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[8](RUNID:7:1203:4886:16756, RUNID:7:1203:4886:26756) | 0.320s | passed |
| testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[9](RUNID:7:1203:5886:16756, RUNID:7:1203:4886:26756) | 0.327s | passed |
| testOpticalDuplicateClusterSamePositionOneOpticalDuplicatesWithinPixelDistance | 0.310s | passed |
| testOpticalDuplicateClustersAddingSecondEndFirstSameCoordinate | 0.314s | passed |
| testOpticalDuplicateFinding | 0.312s | passed |
| testOpticalDuplicateFindingPxelDistance | 0.312s | passed |
| testOpticalDuplicatesAndPCRDuplicatesOrientationDifference | 0.313s | passed |
| testOpticalDuplicatesDifferentReadGroups | 0.313s | passed |
| testOpticalDuplicatesTheSameReadGroup | 0.311s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[0](RUNID:7:1203:2886:82292, RUNID:7:1205:3886:16834) | 0.314s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[10](RUNID:7:1203:6886:16756, RUNID:7:1203:4886:26756) | 0.306s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[11](RUNID:7:1203:2886:34756, RUNID:7:1203:2886:36756) | 0.310s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[12](RUNID:7:1203:2886:35756, RUNID:7:1203:2886:36756) | 0.309s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[13](RUNID:7:1203:2886:37756, RUNID:7:1203:2886:36756) | 0.307s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[14](RUNID:7:1203:2886:38756, RUNID:7:1203:2886:36756) | 0.307s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[15](HK3T5CCXX160204:3:1112:11586:37067, HK3T5CCXX160204:3:1112:11586:32144) | 0.317s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[1](RUNID:7:1203:2886:16756, RUNID:7:1205:3886:16756) | 0.314s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[2](RUNID:7:1204:2886:16756, RUNID:7:1205:3886:16756) | 0.314s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[3](RUNID:7:1205:2886:16756, RUNID:7:1205:3886:16756) | 0.315s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[4](RUNID:7:1206:2886:16756, RUNID:7:1205:3886:16756) | 0.316s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[5](RUNID:7:1207:2886:16756, RUNID:7:1205:3886:16756) | 0.313s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[6](RUNID:7:1203:2886:16756, RUNID:7:1203:4886:26756) | 0.317s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[7](RUNID:7:1203:3886:16756, RUNID:7:1203:4886:26756) | 0.311s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[8](RUNID:7:1203:4886:16756, RUNID:7:1203:4886:26756) | 0.309s | passed |
| testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[9](RUNID:7:1203:5886:16756, RUNID:7:1203:4886:26756) | 0.310s | passed |
| testPathologicalOrderingAtTheSamePosition | 0.310s | passed |
| testReadSameStartPositionOrientationOverride | 0.315s | passed |
| testReadsHaveSameStartPositionButDifferentChromosomeNonEquivalence | 0.320s | passed |
| testSecondEndIsBeforeFirstInCoordinate | 0.315s | passed |
| testSingleMappedFragment | 0.312s | passed |
| testSingleMappedFragmentAndSingleMappedPair | 0.318s | passed |
| testSingleMappedFragmentAndTwoMappedPairs | 0.311s | passed |
| testSingleMappedPair | 0.311s | passed |
| testSingleUnmappedFragment | 0.311s | passed |
| testSingleUnmappedPair | 0.304s | passed |
| testStackOverFlowPairSetSwap | 0.312s | passed |
| testSupplementaryReadUnmappedMate | 0.314s | passed |
| testThreeGroupsOnDifferentChromosomesOfThreeMappedPairs | 0.320s | passed |
| testThreeMappedPairs | 0.308s | passed |
| testThreeMappedPairsWithMatchingSecondMate | 0.314s | passed |
| testTwoGroupsOnDifferentChromosomesOfThreeMappedPairs | 0.319s | passed |
| testTwoGroupsOnDifferentChromosomesOfTwoFragments | 0.315s | passed |
| testTwoGroupsOnDifferentChromosomesOfTwoMappedPairs | 0.312s | passed |
| testTwoMappedFragments | 0.319s | passed |
| testTwoMappedPairWithSamePosition | 0.309s | passed |
| testTwoMappedPairWithSamePositionDifferentStrands | 0.309s | passed |
| testTwoMappedPairWithSamePositionDifferentStrands2 | 0.316s | passed |
| testTwoMappedPairs | 0.300s | passed |
| testTwoMappedPairsAndMappedSecondaryFragment | 0.307s | passed |
| testTwoMappedPairsAndTerminalUnmappedFragment | 0.309s | passed |
| testTwoMappedPairsAndTerminalUnmappedPair | 0.303s | passed |
| testTwoMappedPairsMatesSoftClipped | 0.305s | passed |
| testTwoMappedPairsWithOppositeOrientations | 0.313s | passed |
| testTwoMappedPairsWithOppositeOrientationsNumberTwo | 0.309s | passed |
| testTwoMappedPairsWithSoftClipping | 0.304s | passed |
| testTwoMappedPairsWithSoftClippingBoth | 0.314s | passed |
| testTwoMappedPairsWithSoftClippingFirstOfPairOnly | 0.306s | passed |
| testTwoMappedPairsWithSoftClippingFirstOfPairOnlyNoMateCigar | 0.302s | passed |
| testTwoMappedPairsWithSupplementaryReads[0](true, true, true) | 0.312s | passed |
| testTwoMappedPairsWithSupplementaryReads[1](true, false, true) | 0.307s | passed |
| testTwoMappedPairsWithSupplementaryReads[2](false, true, true) | 0.308s | passed |
| testTwoMappedPairsWithSupplementaryReads[3](true, true, false) | 0.313s | passed |
| testTwoMappedPairsWithSupplementaryReads[4](true, false, false) | 0.311s | passed |
| testTwoMappedPairsWithSupplementaryReads[5](false, true, false) | 0.313s | passed |
| testTwoUnmappedFragments | 0.319s | passed |
Standard error
```
[October 27, 2025 at 5:47:10 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1604321280
[October 27, 2025 at 5:47:10 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1604321280
[October 27, 2025 at 5:47:11 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1604321280
[October 27, 2025 at 5:47:13 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:14 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
17:47:14.502 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:14.505 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression1498010306737873111.so: /tmp/local16469784102307886954/libgkl_compression1498010306737873111.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:14.506 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:14.509 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression1550010092508298772.so: /tmp/local16469784102307886954/libgkl_compression1550010092508298772.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:14.510 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:14.510 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
17:47:14.510 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
17:47:14.510 INFO MarkDuplicatesSpark - Executing as runner@runnervmrcw8b on Linux v6.11.0-1018-azure aarch64
17:47:14.510 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
17:47:14.510 INFO MarkDuplicatesSpark - Start Date/Time: October 27, 2025 at 5:47:14 PM UTC
17:47:14.510 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:14.510 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:14.511 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
17:47:14.511 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
17:47:14.511 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
17:47:14.511 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
17:47:14.511 INFO MarkDuplicatesSpark - Deflater: JdkDeflater
17:47:14.511 INFO MarkDuplicatesSpark - Inflater: JdkInflater
17:47:14.511 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
17:47:14.511 INFO MarkDuplicatesSpark - Requester pays: disabled
17:47:14.511 INFO MarkDuplicatesSpark - Initializing engine
17:47:14.511 INFO MarkDuplicatesSpark - Done initializing engine
17:47:14.511 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
17:47:14.512 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.514 INFO MemoryStore - Block broadcast_1280 stored as values in memory (estimated size 306.3 KiB, free 1903.3 MiB)
17:47:14.520 INFO MemoryStore - Block broadcast_1280_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1903.2 MiB)
17:47:14.521 INFO BlockManagerInfo - Added broadcast_1280_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.9 MiB)
17:47:14.521 INFO SparkContext - Created broadcast 1280 from newAPIHadoopFile at PathSplitSource.java:96
17:47:14.539 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.541 INFO MemoryStore - Block broadcast_1281 stored as values in memory (estimated size 306.3 KiB, free 1902.9 MiB)
17:47:14.546 INFO MemoryStore - Block broadcast_1281_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1902.8 MiB)
17:47:14.547 INFO BlockManagerInfo - Added broadcast_1281_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.8 MiB)
17:47:14.547 INFO SparkContext - Created broadcast 1281 from newAPIHadoopFile at PathSplitSource.java:96
17:47:14.580 INFO SparkContext - Starting job: collect at SparkUtils.java:205
17:47:14.581 INFO FileInputFormat - Total input files to process : 1
17:47:14.582 INFO DAGScheduler - Registering RDD 3309 (mapToPair at SparkUtils.java:161) as input to shuffle 74
17:47:14.582 INFO DAGScheduler - Got job 381 (collect at SparkUtils.java:205) with 1 output partitions
17:47:14.582 INFO DAGScheduler - Final stage: ResultStage 535 (collect at SparkUtils.java:205)
17:47:14.582 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 534)
17:47:14.582 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 534)
17:47:14.583 INFO DAGScheduler - Submitting ShuffleMapStage 534 (MapPartitionsRDD[3309] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:14.598 INFO MemoryStore - Block broadcast_1282 stored as values in memory (estimated size 460.4 KiB, free 1902.4 MiB)
17:47:14.600 INFO MemoryStore - Block broadcast_1282_piece0 stored as bytes in memory (estimated size 202.5 KiB, free 1902.2 MiB)
17:47:14.601 INFO BlockManagerInfo - Added broadcast_1282_piece0 in memory on localhost:42685 (size: 202.5 KiB, free: 1918.7 MiB)
17:47:14.601 INFO SparkContext - Created broadcast 1282 from broadcast at DAGScheduler.scala:1580
17:47:14.602 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 534 (MapPartitionsRDD[3309] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:14.602 INFO TaskSchedulerImpl - Adding task set 534.0 with 1 tasks resource profile 0
17:47:14.603 INFO TaskSetManager - Starting task 0.0 in stage 534.0 (TID 558) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9735 bytes)
17:47:14.603 INFO Executor - Running task 0.0 in stage 534.0 (TID 558)
17:47:14.637 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug.bam:0+1507
17:47:14.637 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.638 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.639 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.642 INFO Executor - Finished task 0.0 in stage 534.0 (TID 558). 1148 bytes result sent to driver
17:47:14.642 INFO TaskSetManager - Finished task 0.0 in stage 534.0 (TID 558) in 40 ms on localhost (executor driver) (1/1)
17:47:14.642 INFO TaskSchedulerImpl - Removed TaskSet 534.0, whose tasks have all completed, from pool
17:47:14.643 INFO DAGScheduler - ShuffleMapStage 534 (mapToPair at SparkUtils.java:161) finished in 0.059 s
17:47:14.643 INFO DAGScheduler - looking for newly runnable stages
17:47:14.643 INFO DAGScheduler - running: HashSet()
17:47:14.643 INFO DAGScheduler - waiting: HashSet(ResultStage 535)
17:47:14.643 INFO DAGScheduler - failed: HashSet()
17:47:14.643 INFO DAGScheduler - Submitting ResultStage 535 (MapPartitionsRDD[3312] at mapPartitions at SparkUtils.java:188), which has no missing parents
17:47:14.644 INFO MemoryStore - Block broadcast_1283 stored as values in memory (estimated size 7.4 KiB, free 1902.2 MiB)
17:47:14.644 INFO MemoryStore - Block broadcast_1283_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1902.2 MiB)
17:47:14.645 INFO BlockManagerInfo - Added broadcast_1283_piece0 in memory on localhost:42685 (size: 4.1 KiB, free: 1918.6 MiB)
17:47:14.645 INFO SparkContext - Created broadcast 1283 from broadcast at DAGScheduler.scala:1580
17:47:14.645 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 535 (MapPartitionsRDD[3312] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
17:47:14.645 INFO TaskSchedulerImpl - Adding task set 535.0 with 1 tasks resource profile 0
17:47:14.646 INFO TaskSetManager - Starting task 0.0 in stage 535.0 (TID 559) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:14.646 INFO Executor - Running task 0.0 in stage 535.0 (TID 559)
17:47:14.648 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.648 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.650 INFO Executor - Finished task 0.0 in stage 535.0 (TID 559). 2238 bytes result sent to driver
17:47:14.651 INFO TaskSetManager - Finished task 0.0 in stage 535.0 (TID 559) in 5 ms on localhost (executor driver) (1/1)
17:47:14.651 INFO TaskSchedulerImpl - Removed TaskSet 535.0, whose tasks have all completed, from pool
17:47:14.651 INFO DAGScheduler - ResultStage 535 (collect at SparkUtils.java:205) finished in 0.008 s
17:47:14.651 INFO DAGScheduler - Job 381 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:14.651 INFO TaskSchedulerImpl - Killing all running tasks in stage 535: Stage finished
17:47:14.651 INFO DAGScheduler - Job 381 finished: collect at SparkUtils.java:205, took 0.071294 s
17:47:14.657 INFO MemoryStore - Block broadcast_1284 stored as values in memory (estimated size 1632.0 B, free 1902.2 MiB)
17:47:14.657 INFO MemoryStore - Block broadcast_1284_piece0 stored as bytes in memory (estimated size 145.0 B, free 1902.2 MiB)
17:47:14.657 INFO BlockManagerInfo - Added broadcast_1284_piece0 in memory on localhost:42685 (size: 145.0 B, free: 1918.6 MiB)
17:47:14.658 INFO SparkContext - Created broadcast 1284 from broadcast at MarkDuplicatesSparkUtils.java:126
17:47:14.658 INFO MemoryStore - Block broadcast_1285 stored as values in memory (estimated size 304.0 B, free 1902.2 MiB)
17:47:14.658 INFO MemoryStore - Block broadcast_1285_piece0 stored as bytes in memory (estimated size 37.0 B, free 1902.2 MiB)
17:47:14.659 INFO BlockManagerInfo - Added broadcast_1285_piece0 in memory on localhost:42685 (size: 37.0 B, free: 1918.6 MiB)
17:47:14.659 INFO SparkContext - Created broadcast 1285 from broadcast at MarkDuplicatesSparkUtils.java:127
17:47:14.672 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
17:47:14.673 INFO DAGScheduler - Registering RDD 3321 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 77
17:47:14.673 INFO DAGScheduler - Registering RDD 3325 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 76
17:47:14.673 INFO DAGScheduler - Registering RDD 3329 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 75
17:47:14.673 INFO DAGScheduler - Got job 382 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
17:47:14.673 INFO DAGScheduler - Final stage: ResultStage 540 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
17:47:14.673 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 539)
17:47:14.674 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 539)
17:47:14.674 INFO DAGScheduler - Submitting ShuffleMapStage 537 (MapPartitionsRDD[3321] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
17:47:14.675 INFO MemoryStore - Block broadcast_1286 stored as values in memory (estimated size 24.2 KiB, free 1902.2 MiB)
17:47:14.676 INFO MemoryStore - Block broadcast_1286_piece0 stored as bytes in memory (estimated size 12.6 KiB, free 1902.2 MiB)
17:47:14.676 INFO BlockManagerInfo - Added broadcast_1286_piece0 in memory on localhost:42685 (size: 12.6 KiB, free: 1918.6 MiB)
17:47:14.676 INFO SparkContext - Created broadcast 1286 from broadcast at DAGScheduler.scala:1580
17:47:14.676 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 537 (MapPartitionsRDD[3321] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
17:47:14.676 INFO TaskSchedulerImpl - Adding task set 537.0 with 1 tasks resource profile 0
17:47:14.677 INFO TaskSetManager - Starting task 0.0 in stage 537.0 (TID 560) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
17:47:14.677 INFO Executor - Running task 0.0 in stage 537.0 (TID 560)
17:47:14.680 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.680 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.683 INFO Executor - Finished task 0.0 in stage 537.0 (TID 560). 1922 bytes result sent to driver
17:47:14.684 INFO TaskSetManager - Finished task 0.0 in stage 537.0 (TID 560) in 7 ms on localhost (executor driver) (1/1)
17:47:14.684 INFO TaskSchedulerImpl - Removed TaskSet 537.0, whose tasks have all completed, from pool
17:47:14.684 INFO DAGScheduler - ShuffleMapStage 537 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.010 s
17:47:14.684 INFO DAGScheduler - looking for newly runnable stages
17:47:14.684 INFO DAGScheduler - running: HashSet()
17:47:14.684 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 538, ShuffleMapStage 539, ResultStage 540)
17:47:14.684 INFO DAGScheduler - failed: HashSet()
17:47:14.684 INFO DAGScheduler - Submitting ShuffleMapStage 538 (MapPartitionsRDD[3325] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
17:47:14.685 INFO MemoryStore - Block broadcast_1287 stored as values in memory (estimated size 28.3 KiB, free 1902.1 MiB)
17:47:14.686 INFO MemoryStore - Block broadcast_1287_piece0 stored as bytes in memory (estimated size 14.8 KiB, free 1902.1 MiB)
17:47:14.686 INFO BlockManagerInfo - Added broadcast_1287_piece0 in memory on localhost:42685 (size: 14.8 KiB, free: 1918.6 MiB)
17:47:14.686 INFO SparkContext - Created broadcast 1287 from broadcast at DAGScheduler.scala:1580
17:47:14.687 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 538 (MapPartitionsRDD[3325] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
17:47:14.687 INFO TaskSchedulerImpl - Adding task set 538.0 with 1 tasks resource profile 0
17:47:14.687 INFO TaskSetManager - Starting task 0.0 in stage 538.0 (TID 561) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
17:47:14.688 INFO Executor - Running task 0.0 in stage 538.0 (TID 561)
17:47:14.690 INFO ShuffleBlockFetcherIterator -
```
Getting 1 (129.0 B) non-empty blocks including 1 (129.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:14.690 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:14.696 INFO Executor - Finished task 0.0 in stage 538.0 (TID 561). 1922 bytes result sent to driver 17:47:14.696 INFO TaskSetManager - Finished task 0.0 in stage 538.0 (TID 561) in 9 ms on localhost (executor driver) (1/1) 17:47:14.696 INFO TaskSchedulerImpl - Removed TaskSet 538.0, whose tasks have all completed, from pool 17:47:14.697 INFO DAGScheduler - ShuffleMapStage 538 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.011 s 17:47:14.697 INFO DAGScheduler - looking for newly runnable stages 17:47:14.697 INFO DAGScheduler - running: HashSet() 17:47:14.697 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 539, ResultStage 540) 17:47:14.697 INFO DAGScheduler - failed: HashSet() 17:47:14.697 INFO DAGScheduler - Submitting ShuffleMapStage 539 (MapPartitionsRDD[3329] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents 17:47:14.698 INFO MemoryStore - Block broadcast_1288 stored as values in memory (estimated size 19.0 KiB, free 1902.1 MiB) 17:47:14.698 INFO MemoryStore - Block broadcast_1288_piece0 stored as bytes in memory (estimated size 9.7 KiB, free 1902.1 MiB) 17:47:14.699 INFO BlockManagerInfo - Added broadcast_1288_piece0 in memory on localhost:42685 (size: 9.7 KiB, free: 1918.6 MiB) 17:47:14.699 INFO SparkContext - Created broadcast 1288 from broadcast at DAGScheduler.scala:1580 17:47:14.699 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 539 (MapPartitionsRDD[3329] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0)) 17:47:14.699 INFO TaskSchedulerImpl - Adding task set 539.0 with 1 tasks resource profile 0 17:47:14.700 INFO TaskSetManager - Starting task 0.0 in stage 539.0 (TID 562) (localhost, executor driver, partition 
0, NODE_LOCAL, 9948 bytes) 17:47:14.700 INFO Executor - Running task 0.0 in stage 539.0 (TID 562) 17:47:14.702 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:14.703 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:14.705 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:14.705 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:14.707 INFO Executor - Finished task 0.0 in stage 539.0 (TID 562). 1922 bytes result sent to driver 17:47:14.707 INFO TaskSetManager - Finished task 0.0 in stage 539.0 (TID 562) in 7 ms on localhost (executor driver) (1/1) 17:47:14.707 INFO TaskSchedulerImpl - Removed TaskSet 539.0, whose tasks have all completed, from pool 17:47:14.707 INFO DAGScheduler - ShuffleMapStage 539 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.010 s 17:47:14.707 INFO DAGScheduler - looking for newly runnable stages 17:47:14.708 INFO DAGScheduler - running: HashSet() 17:47:14.708 INFO DAGScheduler - waiting: HashSet(ResultStage 540) 17:47:14.708 INFO DAGScheduler - failed: HashSet() 17:47:14.708 INFO DAGScheduler - Submitting ResultStage 540 (MapPartitionsRDD[3331] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents 17:47:14.709 INFO MemoryStore - Block broadcast_1289 stored as values in memory (estimated size 20.2 KiB, free 1902.1 MiB) 17:47:14.709 INFO MemoryStore - Block broadcast_1289_piece0 stored as bytes in memory (estimated size 10.6 KiB, free 1902.1 MiB) 17:47:14.709 INFO BlockManagerInfo - Added broadcast_1289_piece0 in memory on localhost:42685 (size: 10.6 KiB, free: 1918.6 MiB) 17:47:14.710 INFO SparkContext - Created broadcast 1289 from broadcast at DAGScheduler.scala:1580 
17:47:14.710 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 540 (MapPartitionsRDD[3331] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
17:47:14.710 INFO TaskSchedulerImpl - Adding task set 540.0 with 1 tasks resource profile 0
17:47:14.711 INFO TaskSetManager - Starting task 0.0 in stage 540.0 (TID 563) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:14.711 INFO Executor - Running task 0.0 in stage 540.0 (TID 563)
17:47:14.713 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.713 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.715 INFO Executor - Finished task 0.0 in stage 540.0 (TID 563). 1944 bytes result sent to driver
17:47:14.715 INFO TaskSetManager - Finished task 0.0 in stage 540.0 (TID 563) in 5 ms on localhost (executor driver) (1/1)
17:47:14.716 INFO TaskSchedulerImpl - Removed TaskSet 540.0, whose tasks have all completed, from pool
17:47:14.716 INFO DAGScheduler - ResultStage 540 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.008 s
17:47:14.716 INFO DAGScheduler - Job 382 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:14.716 INFO TaskSchedulerImpl - Killing all running tasks in stage 540: Stage finished
17:47:14.716 INFO DAGScheduler - Job 382 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.043930 s
17:47:14.727 INFO MemoryStore - Block broadcast_1290 stored as values in memory (estimated size 20.3 KiB, free 1902.0 MiB)
17:47:14.728 INFO MemoryStore - Block broadcast_1290_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1902.0 MiB)
17:47:14.728 INFO BlockManagerInfo - Added broadcast_1290_piece0 in memory on localhost:42685 (size: 1850.0 B, free: 1918.6 MiB)
17:47:14.728 INFO SparkContext - Created broadcast 1290 from broadcast at ReadsSparkSink.java:133
17:47:14.729 INFO MemoryStore - Block broadcast_1291 stored as values in memory (estimated size 20.4 KiB, free 1902.0 MiB)
17:47:14.729 INFO MemoryStore - Block broadcast_1291_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1902.0 MiB)
17:47:14.730 INFO BlockManagerInfo - Added broadcast_1291_piece0 in memory on localhost:42685 (size: 1850.0 B, free: 1918.6 MiB)
17:47:14.730 INFO SparkContext - Created broadcast 1291 from broadcast at BamSink.java:76
17:47:14.732 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
17:47:14.732 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:14.732 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:14.745 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
17:47:14.746 INFO DAGScheduler - Registering RDD 3333 (mapToPair at SparkUtils.java:161) as input to shuffle 78
17:47:14.746 INFO DAGScheduler - Got job 383 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
17:47:14.746 INFO DAGScheduler - Final stage: ResultStage 545 (runJob at SparkHadoopWriter.scala:83)
17:47:14.746 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 544)
17:47:14.746 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 544)
17:47:14.747 INFO DAGScheduler - Submitting ShuffleMapStage 544 (MapPartitionsRDD[3333] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:14.748 INFO MemoryStore - Block broadcast_1292 stored as values in memory (estimated size 18.1 KiB, free 1902.0 MiB)
17:47:14.748 INFO MemoryStore - Block broadcast_1292_piece0 stored as bytes in memory (estimated size 9.5 KiB, free 1902.0 MiB)
17:47:14.749 INFO BlockManagerInfo - Added broadcast_1292_piece0 in memory on localhost:42685 (size: 9.5 KiB, free: 1918.6 MiB)
17:47:14.749 INFO SparkContext - Created broadcast 1292 from broadcast at DAGScheduler.scala:1580
17:47:14.749 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 544 (MapPartitionsRDD[3333] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:14.749 INFO TaskSchedulerImpl - Adding task set 544.0 with 1 tasks resource profile 0
17:47:14.750 INFO TaskSetManager - Starting task 0.0 in stage 544.0 (TID 564) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
17:47:14.750 INFO Executor - Running task 0.0 in stage 544.0 (TID 564)
17:47:14.752 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.752 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.755 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.755 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.756 INFO Executor - Finished task 0.0 in stage 544.0 (TID 564). 1922 bytes result sent to driver
17:47:14.757 INFO TaskSetManager - Finished task 0.0 in stage 544.0 (TID 564) in 7 ms on localhost (executor driver) (1/1)
17:47:14.757 INFO TaskSchedulerImpl - Removed TaskSet 544.0, whose tasks have all completed, from pool
17:47:14.757 INFO DAGScheduler - ShuffleMapStage 544 (mapToPair at SparkUtils.java:161) finished in 0.010 s
17:47:14.757 INFO DAGScheduler - looking for newly runnable stages
17:47:14.757 INFO DAGScheduler - running: HashSet()
17:47:14.757 INFO DAGScheduler - waiting: HashSet(ResultStage 545)
17:47:14.757 INFO DAGScheduler - failed: HashSet()
17:47:14.757 INFO DAGScheduler - Submitting ResultStage 545 (MapPartitionsRDD[3338] at mapToPair at BamSink.java:91), which has no missing parents
17:47:14.763 INFO MemoryStore - Block broadcast_1293 stored as values in memory (estimated size 163.7 KiB, free 1901.8 MiB)
17:47:14.764 INFO MemoryStore - Block broadcast_1293_piece0 stored as bytes in memory (estimated size 73.9 KiB, free 1901.7 MiB)
17:47:14.764 INFO BlockManagerInfo - Added broadcast_1293_piece0 in memory on localhost:42685 (size: 73.9 KiB, free: 1918.5 MiB)
17:47:14.765 INFO SparkContext - Created broadcast 1293 from broadcast at DAGScheduler.scala:1580
17:47:14.765 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 545 (MapPartitionsRDD[3338] at mapToPair at BamSink.java:91) (first 15 tasks are for partitions Vector(0))
17:47:14.765 INFO TaskSchedulerImpl - Adding task set 545.0 with 1 tasks resource profile 0
17:47:14.766 INFO TaskSetManager - Starting task 0.0 in stage 545.0 (TID 565) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:14.766 INFO Executor - Running task 0.0 in stage 545.0 (TID 565)
17:47:14.771 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.771 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.774 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
17:47:14.774 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:14.774 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:14.774 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
17:47:14.774 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:14.774 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:14.782 WARN IntelDeflaterFactory - Intel Deflater not supported, using Java.util.zip.Deflater
17:47:14.790 INFO FileOutputCommitter - Saved output of task 'attempt_202510271747146483692438761305369_3338_r_000000_0' to file:/tmp/local16469784102307886954/markdups16345994627619946012.bam.parts/_temporary/0/task_202510271747146483692438761305369_3338_r_000000
17:47:14.790 INFO SparkHadoopMapRedUtil - attempt_202510271747146483692438761305369_3338_r_000000_0: Committed. Elapsed time: 0 ms.
17:47:14.791 INFO Executor - Finished task 0.0 in stage 545.0 (TID 565). 1858 bytes result sent to driver
17:47:14.791 INFO TaskSetManager - Finished task 0.0 in stage 545.0 (TID 565) in 26 ms on localhost (executor driver) (1/1)
17:47:14.791 INFO TaskSchedulerImpl - Removed TaskSet 545.0, whose tasks have all completed, from pool
17:47:14.791 INFO DAGScheduler - ResultStage 545 (runJob at SparkHadoopWriter.scala:83) finished in 0.033 s
17:47:14.791 INFO DAGScheduler - Job 383 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:14.791 INFO TaskSchedulerImpl - Killing all running tasks in stage 545: Stage finished
17:47:14.792 INFO DAGScheduler - Job 383 finished: runJob at SparkHadoopWriter.scala:83, took 0.046357 s
17:47:14.792 INFO SparkHadoopWriter - Start to commit write Job job_202510271747146483692438761305369_3338.
17:47:14.796 INFO SparkHadoopWriter - Write Job job_202510271747146483692438761305369_3338 committed. Elapsed time: 3 ms.
17:47:14.798 WARN IntelDeflaterFactory - Intel Deflater not supported, using Java.util.zip.Deflater
17:47:14.804 INFO HadoopFileSystemWrapper - Concatenating 3 parts to /tmp/local16469784102307886954/markdups16345994627619946012.bam
17:47:14.807 INFO HadoopFileSystemWrapper - Concatenating to /tmp/local16469784102307886954/markdups16345994627619946012.bam done
17:47:14.807 INFO IndexFileMerger - Merging .sbi files in temp directory /tmp/local16469784102307886954/markdups16345994627619946012.bam.parts/ to /tmp/local16469784102307886954/markdups16345994627619946012.bam.sbi
17:47:14.812 INFO IndexFileMerger - Done merging .sbi files
17:47:14.812 INFO IndexFileMerger - Merging .bai files in temp directory /tmp/local16469784102307886954/markdups16345994627619946012.bam.parts/ to /tmp/local16469784102307886954/markdups16345994627619946012.bam.bai
17:47:14.816 INFO IndexFileMerger - Done merging .bai files
17:47:14.816 INFO MarkDuplicatesSpark - Shutting down engine
[October 27, 2025 at 5:47:14 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes.
Runtime.totalMemory()=1610612736
17:47:14.817 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.818 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.818 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.818 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.819 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.819 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.819 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.819 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.824 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:14.827 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression3468276908418497242.so: /tmp/local16469784102307886954/libgkl_compression3468276908418497242.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:14.828 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:14.831 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression367710363883357676.so: /tmp/local16469784102307886954/libgkl_compression367710363883357676.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:14.832 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:14.832 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
17:47:14.832 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
17:47:14.832 INFO MarkDuplicatesSpark - Executing as runner@runnervmrcw8b on Linux v6.11.0-1018-azure aarch64
17:47:14.832 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
17:47:14.832 INFO MarkDuplicatesSpark - Start Date/Time: October 27, 2025 at 5:47:14 PM UTC
17:47:14.832 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:14.832 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:14.832 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
17:47:14.832 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
17:47:14.833 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
17:47:14.833 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
17:47:14.833 INFO MarkDuplicatesSpark - Deflater: JdkDeflater
17:47:14.833 INFO MarkDuplicatesSpark - Inflater: JdkInflater
17:47:14.833 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
17:47:14.833 INFO MarkDuplicatesSpark - Requester pays: disabled
17:47:14.833 INFO MarkDuplicatesSpark - Initializing engine
17:47:14.833 INFO MarkDuplicatesSpark - Done initializing engine
17:47:14.833 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
17:47:14.834 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.835 INFO MemoryStore - Block broadcast_1294 stored as values in memory (estimated size 306.3 KiB, free 1901.4 MiB)
17:47:14.841 INFO MemoryStore - Block broadcast_1294_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1901.4 MiB)
17:47:14.842 INFO BlockManagerInfo - Added broadcast_1294_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.5 MiB)
17:47:14.842 INFO SparkContext - Created broadcast 1294 from newAPIHadoopFile at PathSplitSource.java:96
17:47:14.860 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.861 INFO MemoryStore - Block broadcast_1295 stored as values in memory (estimated size 306.3 KiB, free 1901.1 MiB)
17:47:14.867 INFO MemoryStore - Block broadcast_1295_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1901.0 MiB)
17:47:14.868 INFO BlockManagerInfo - Added broadcast_1295_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.4 MiB)
17:47:14.868 INFO SparkContext - Created broadcast 1295 from newAPIHadoopFile at PathSplitSource.java:96
17:47:14.901 INFO SparkContext - Starting job: collect at SparkUtils.java:205
17:47:14.901 INFO FileInputFormat - Total input files to process : 1
17:47:14.902 INFO DAGScheduler - Registering RDD 3352 (mapToPair at SparkUtils.java:161) as input to shuffle 79
17:47:14.902 INFO DAGScheduler - Got job 384 (collect at SparkUtils.java:205) with 1 output partitions
17:47:14.902 INFO DAGScheduler - Final stage: ResultStage 547 (collect at SparkUtils.java:205)
17:47:14.902 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 546)
17:47:14.902 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 546)
17:47:14.902 INFO DAGScheduler - Submitting ShuffleMapStage 546 (MapPartitionsRDD[3352] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:14.918 INFO MemoryStore - Block broadcast_1296 stored as values in memory (estimated size 460.4 KiB, free 1900.6 MiB)
17:47:14.920 INFO MemoryStore - Block broadcast_1296_piece0 stored as bytes in memory (estimated size 202.5 KiB, free 1900.4 MiB)
17:47:14.920 INFO BlockManagerInfo - Added broadcast_1296_piece0 in memory on localhost:42685 (size: 202.5 KiB, free: 1918.2 MiB)
17:47:14.920 INFO SparkContext - Created broadcast 1296 from broadcast at DAGScheduler.scala:1580
17:47:14.920 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 546 (MapPartitionsRDD[3352] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:14.921 INFO TaskSchedulerImpl - Adding task set 546.0 with 1 tasks resource profile 0
17:47:14.921 INFO TaskSetManager - Starting task 0.0 in stage 546.0 (TID 566) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9736 bytes)
17:47:14.921 INFO Executor - Running task 0.0 in stage 546.0 (TID 566)
17:47:14.955 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug2.bam:0+1856
17:47:14.955 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.956 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.957 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:14.960 INFO Executor - Finished task 0.0 in stage 546.0 (TID 566). 1148 bytes result sent to driver
17:47:14.960 INFO TaskSetManager - Finished task 0.0 in stage 546.0 (TID 566) in 39 ms on localhost (executor driver) (1/1)
17:47:14.960 INFO TaskSchedulerImpl - Removed TaskSet 546.0, whose tasks have all completed, from pool
17:47:14.960 INFO DAGScheduler - ShuffleMapStage 546 (mapToPair at SparkUtils.java:161) finished in 0.057 s
17:47:14.960 INFO DAGScheduler - looking for newly runnable stages
17:47:14.961 INFO DAGScheduler - running: HashSet()
17:47:14.961 INFO DAGScheduler - waiting: HashSet(ResultStage 547)
17:47:14.961 INFO DAGScheduler - failed: HashSet()
17:47:14.961 INFO DAGScheduler - Submitting ResultStage 547 (MapPartitionsRDD[3355] at mapPartitions at SparkUtils.java:188), which has no missing parents
17:47:14.961 INFO MemoryStore - Block broadcast_1297 stored as values in memory (estimated size 7.4 KiB, free 1900.4 MiB)
17:47:14.962 INFO MemoryStore - Block broadcast_1297_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1900.4 MiB)
17:47:14.962 INFO BlockManagerInfo - Added broadcast_1297_piece0 in memory on localhost:42685 (size: 4.1 KiB, free: 1918.2 MiB)
17:47:14.962 INFO SparkContext - Created broadcast 1297 from broadcast at DAGScheduler.scala:1580
17:47:14.963 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 547 (MapPartitionsRDD[3355] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
17:47:14.963 INFO TaskSchedulerImpl - Adding task set 547.0 with 1 tasks resource profile 0
17:47:14.963 INFO TaskSetManager - Starting task 0.0 in stage 547.0 (TID 567) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:14.963 INFO Executor - Running task 0.0 in stage 547.0 (TID 567)
17:47:14.965 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.965 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:14.967 INFO Executor - Finished task 0.0 in stage 547.0 (TID 567). 2495 bytes result sent to driver
17:47:14.968 INFO TaskSetManager - Finished task 0.0 in stage 547.0 (TID 567) in 5 ms on localhost (executor driver) (1/1)
17:47:14.968 INFO TaskSchedulerImpl - Removed TaskSet 547.0, whose tasks have all completed, from pool
17:47:14.968 INFO DAGScheduler - ResultStage 547 (collect at SparkUtils.java:205) finished in 0.007 s
17:47:14.968 INFO DAGScheduler - Job 384 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:14.968 INFO TaskSchedulerImpl - Killing all running tasks in stage 547: Stage finished
17:47:14.968 INFO DAGScheduler - Job 384 finished: collect at SparkUtils.java:205, took 0.067392 s
17:47:14.973 INFO MemoryStore - Block broadcast_1298 stored as values in memory (estimated size 1632.0 B, free 1900.4 MiB)
17:47:14.974 INFO MemoryStore - Block broadcast_1298_piece0 stored as bytes in memory (estimated size 145.0 B, free 1900.4 MiB)
17:47:14.974 INFO BlockManagerInfo - Added broadcast_1298_piece0 in memory on localhost:42685 (size: 145.0 B, free: 1918.2 MiB)
17:47:14.974 INFO SparkContext - Created broadcast 1298 from broadcast at MarkDuplicatesSparkUtils.java:126
17:47:14.975 INFO MemoryStore - Block broadcast_1299 stored as values in memory (estimated size 304.0 B, free 1900.4 MiB)
17:47:14.975 INFO MemoryStore - Block broadcast_1299_piece0 stored as bytes in memory (estimated size 37.0 B, free 1900.4 MiB)
17:47:14.975 INFO BlockManagerInfo - Added broadcast_1299_piece0 in memory on localhost:42685 (size: 37.0 B, free: 1918.2 MiB)
17:47:14.976 INFO SparkContext - Created broadcast 1299 from broadcast at MarkDuplicatesSparkUtils.java:127
17:47:14.989 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
17:47:14.990 INFO DAGScheduler - Registering RDD 3364 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 82
17:47:14.990 INFO DAGScheduler - Registering RDD 3368 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 81
17:47:14.990 INFO DAGScheduler - Registering RDD 3372 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 80
17:47:14.990 INFO DAGScheduler - Got job 385 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
17:47:14.991 INFO DAGScheduler - Final stage: ResultStage 552 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
17:47:14.991 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 551)
17:47:14.991 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 551)
17:47:14.991 INFO DAGScheduler - Submitting ShuffleMapStage 549 (MapPartitionsRDD[3364] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
17:47:14.992 INFO MemoryStore - Block broadcast_1300 stored as values in memory (estimated size 24.2 KiB, free 1900.3 MiB)
17:47:14.992 INFO MemoryStore - Block broadcast_1300_piece0 stored as bytes in memory (estimated size 12.6 KiB, free 1900.3 MiB)
17:47:14.992 INFO BlockManagerInfo - Added broadcast_1300_piece0 in memory on localhost:42685 (size: 12.6 KiB, free: 1918.2 MiB)
17:47:14.993 INFO SparkContext - Created broadcast 1300 from broadcast at DAGScheduler.scala:1580
17:47:14.993 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 549 (MapPartitionsRDD[3364] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
17:47:14.993 INFO TaskSchedulerImpl - Adding task set 549.0 with 1 tasks resource profile 0
17:47:14.994 INFO TaskSetManager - Starting task 0.0 in stage 549.0 (TID 568) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
17:47:14.994 INFO Executor - Running task 0.0 in stage 549.0 (TID 568)
17:47:14.996 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:14.996 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:15.000 INFO Executor - Finished task 0.0 in stage 549.0 (TID 568). 1922 bytes result sent to driver
17:47:15.000 INFO TaskSetManager - Finished task 0.0 in stage 549.0 (TID 568) in 7 ms on localhost (executor driver) (1/1)
17:47:15.000 INFO TaskSchedulerImpl - Removed TaskSet 549.0, whose tasks have all completed, from pool
17:47:15.000 INFO DAGScheduler - ShuffleMapStage 549 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.009 s
17:47:15.000 INFO DAGScheduler - looking for newly runnable stages
17:47:15.000 INFO DAGScheduler - running: HashSet()
17:47:15.001 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 550, ShuffleMapStage 551, ResultStage 552)
17:47:15.001 INFO DAGScheduler - failed: HashSet()
17:47:15.001 INFO DAGScheduler - Submitting ShuffleMapStage 550 (MapPartitionsRDD[3368] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
17:47:15.002 INFO MemoryStore - Block broadcast_1301 stored as values in memory (estimated size 28.3 KiB, free 1900.3 MiB)
17:47:15.002 INFO MemoryStore - Block broadcast_1301_piece0 stored as bytes in memory (estimated size 14.8 KiB, free 1900.3 MiB)
17:47:15.002 INFO BlockManagerInfo - Added broadcast_1301_piece0 in memory on localhost:42685 (size: 14.8 KiB, free: 1918.2 MiB)
17:47:15.003 INFO SparkContext - Created broadcast 1301 from broadcast at DAGScheduler.scala:1580
17:47:15.003 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 550 (MapPartitionsRDD[3368] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
17:47:15.003 INFO TaskSchedulerImpl - Adding task set 550.0 with 1 tasks resource profile 0
17:47:15.003 INFO TaskSetManager - Starting task 0.0 in stage 550.0 (TID 569) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
17:47:15.004 INFO Executor - Running task 0.0 in stage 550.0 (TID 569)
17:47:15.006 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:15.006 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:15.012 INFO Executor - Finished task 0.0 in stage 550.0 (TID 569). 1922 bytes result sent to driver
17:47:15.013 INFO TaskSetManager - Finished task 0.0 in stage 550.0 (TID 569) in 10 ms on localhost (executor driver) (1/1)
17:47:15.013 INFO TaskSchedulerImpl - Removed TaskSet 550.0, whose tasks have all completed, from pool
17:47:15.013 INFO DAGScheduler - ShuffleMapStage 550 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.012 s
17:47:15.013 INFO DAGScheduler - looking for newly runnable stages
17:47:15.013 INFO DAGScheduler - running: HashSet()
17:47:15.013 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 551, ResultStage 552)
17:47:15.013 INFO DAGScheduler - failed: HashSet()
17:47:15.013 INFO DAGScheduler - Submitting ShuffleMapStage 551 (MapPartitionsRDD[3372] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
17:47:15.014 INFO MemoryStore - Block broadcast_1302 stored as values in memory (estimated size 19.0 KiB, free 1900.3 MiB)
17:47:15.015 INFO MemoryStore - Block broadcast_1302_piece0 stored as bytes in memory (estimated size 9.7 KiB, free 1900.3 MiB)
17:47:15.015 INFO BlockManagerInfo - Added broadcast_1302_piece0 in memory on localhost:42685 (size: 9.7 KiB, free: 1918.2 MiB)
17:47:15.015 INFO SparkContext - Created broadcast 1302 from broadcast at DAGScheduler.scala:1580
17:47:15.015 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 551 (MapPartitionsRDD[3372] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
17:47:15.016 INFO TaskSchedulerImpl - Adding task set 551.0 with 1 tasks resource profile 0
17:47:15.016 INFO TaskSetManager - Starting task 0.0 in stage 551.0 (TID 570) (localhost, executor driver, partition 
0, NODE_LOCAL, 9948 bytes) 17:47:15.017 INFO Executor - Running task 0.0 in stage 551.0 (TID 570) 17:47:15.018 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:15.019 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:15.021 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:15.021 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:15.023 INFO Executor - Finished task 0.0 in stage 551.0 (TID 570). 1922 bytes result sent to driver 17:47:15.023 INFO TaskSetManager - Finished task 0.0 in stage 551.0 (TID 570) in 7 ms on localhost (executor driver) (1/1) 17:47:15.023 INFO TaskSchedulerImpl - Removed TaskSet 551.0, whose tasks have all completed, from pool 17:47:15.023 INFO DAGScheduler - ShuffleMapStage 551 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.009 s 17:47:15.023 INFO DAGScheduler - looking for newly runnable stages 17:47:15.024 INFO DAGScheduler - running: HashSet() 17:47:15.024 INFO DAGScheduler - waiting: HashSet(ResultStage 552) 17:47:15.024 INFO DAGScheduler - failed: HashSet() 17:47:15.024 INFO DAGScheduler - Submitting ResultStage 552 (MapPartitionsRDD[3374] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents 17:47:15.025 INFO MemoryStore - Block broadcast_1303 stored as values in memory (estimated size 20.2 KiB, free 1900.2 MiB) 17:47:15.025 INFO MemoryStore - Block broadcast_1303_piece0 stored as bytes in memory (estimated size 10.6 KiB, free 1900.2 MiB) 17:47:15.025 INFO BlockManagerInfo - Added broadcast_1303_piece0 in memory on localhost:42685 (size: 10.6 KiB, free: 1918.1 MiB) 17:47:15.026 INFO SparkContext - Created broadcast 1303 from broadcast at DAGScheduler.scala:1580 
17:47:15.026 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 552 (MapPartitionsRDD[3374] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
17:47:15.026 INFO TaskSchedulerImpl - Adding task set 552.0 with 1 tasks resource profile 0
17:47:15.026 INFO TaskSetManager - Starting task 0.0 in stage 552.0 (TID 571) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:15.027 INFO Executor - Running task 0.0 in stage 552.0 (TID 571)
17:47:15.029 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:15.029 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:15.031 INFO Executor - Finished task 0.0 in stage 552.0 (TID 571). 2053 bytes result sent to driver
17:47:15.032 INFO TaskSetManager - Finished task 0.0 in stage 552.0 (TID 571) in 6 ms on localhost (executor driver) (1/1)
17:47:15.032 INFO TaskSchedulerImpl - Removed TaskSet 552.0, whose tasks have all completed, from pool
17:47:15.032 INFO DAGScheduler - ResultStage 552 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.008 s
17:47:15.032 INFO DAGScheduler - Job 385 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:15.032 INFO TaskSchedulerImpl - Killing all running tasks in stage 552: Stage finished
17:47:15.032 INFO DAGScheduler - Job 385 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.043060 s
17:47:15.044 INFO MemoryStore - Block broadcast_1304 stored as values in memory (estimated size 20.3 KiB, free 1900.2 MiB)
17:47:15.044 INFO MemoryStore - Block broadcast_1304_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1900.2 MiB)
17:47:15.045 INFO BlockManagerInfo - Added broadcast_1304_piece0 in memory on localhost:42685 (size: 1850.0 B, free: 1918.1 MiB)
17:47:15.045 INFO SparkContext - Created broadcast 1304 from broadcast at ReadsSparkSink.java:133
17:47:15.046 INFO MemoryStore - Block broadcast_1305 stored as values in memory (estimated size 20.4 KiB, free 1900.2 MiB)
17:47:15.046 INFO MemoryStore - Block broadcast_1305_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1900.2 MiB)
17:47:15.046 INFO BlockManagerInfo - Added broadcast_1305_piece0 in memory on localhost:42685 (size: 1850.0 B, free: 1918.1 MiB)
17:47:15.046 INFO SparkContext - Created broadcast 1305 from broadcast at BamSink.java:76
17:47:15.048 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
17:47:15.049 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:15.049 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:15.062 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
17:47:15.063 INFO DAGScheduler - Registering RDD 3376 (mapToPair at SparkUtils.java:161) as input to shuffle 83
17:47:15.063 INFO DAGScheduler - Got job 386 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
17:47:15.063 INFO DAGScheduler - Final stage: ResultStage 557 (runJob at SparkHadoopWriter.scala:83)
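The FileOutputCommitter lines above (algorithm version 1, `_temporary` folders under the output directory) reflect Hadoop's two-phase output commit: each task attempt writes into a `_temporary` subdirectory, and committing moves the attempt's files into the final output location. As a minimal, hypothetical sketch of that pattern (invented names, plain `java.nio.file` instead of the Hadoop filesystem API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative only: mimics the write-to-_temporary / rename-on-commit
// protocol that FileOutputCommitter (algorithm v1) logs above describe.
public class CommitSketch {
    // A task attempt writes its part file under <out>/_temporary/<attemptId>/.
    static Path writeTaskAttempt(Path outputDir, String attemptId, byte[] data) throws IOException {
        Path attemptDir = outputDir.resolve("_temporary").resolve(attemptId);
        Files.createDirectories(attemptDir);
        Path part = attemptDir.resolve("part-r-00000");
        Files.write(part, data);
        return part;
    }

    // Committing the task moves its file out of _temporary into the output
    // directory; an attempt that failed is simply never moved.
    static Path commitTask(Path outputDir, Path attemptOutput) throws IOException {
        Path dest = outputDir.resolve(attemptOutput.getFileName());
        return Files.move(attemptOutput, dest, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempDirectory("commit-sketch");
        Path attempt = writeTaskAttempt(out, "attempt_0", "reads".getBytes());
        Path committed = commitTask(out, attempt);
        System.out.println(Files.exists(committed));  // true: part file is in the output dir
        System.out.println(Files.exists(attempt));    // false: moved out of _temporary
    }
}
```

The real committer also handles speculative duplicate attempts and job-level commit; this sketch only shows why the log mentions saving output to a `_temporary/0/task_...` path before reporting "Committed".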
17:47:15.063 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 556)
17:47:15.063 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 556)
17:47:15.063 INFO DAGScheduler - Submitting ShuffleMapStage 556 (MapPartitionsRDD[3376] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:15.065 INFO MemoryStore - Block broadcast_1306 stored as values in memory (estimated size 18.1 KiB, free 1900.2 MiB)
17:47:15.065 INFO MemoryStore - Block broadcast_1306_piece0 stored as bytes in memory (estimated size 9.5 KiB, free 1900.2 MiB)
17:47:15.065 INFO BlockManagerInfo - Added broadcast_1306_piece0 in memory on localhost:42685 (size: 9.5 KiB, free: 1918.1 MiB)
17:47:15.066 INFO SparkContext - Created broadcast 1306 from broadcast at DAGScheduler.scala:1580
17:47:15.066 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 556 (MapPartitionsRDD[3376] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:15.066 INFO TaskSchedulerImpl - Adding task set 556.0 with 1 tasks resource profile 0
17:47:15.066 INFO TaskSetManager - Starting task 0.0 in stage 556.0 (TID 572) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
17:47:15.067 INFO Executor - Running task 0.0 in stage 556.0 (TID 572)
17:47:15.069 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:15.069 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:15.071 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:15.071 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:15.073 INFO Executor - Finished task 0.0 in stage 556.0 (TID 572). 1922 bytes result sent to driver
17:47:15.073 INFO TaskSetManager - Finished task 0.0 in stage 556.0 (TID 572) in 7 ms on localhost (executor driver) (1/1)
17:47:15.074 INFO TaskSchedulerImpl - Removed TaskSet 556.0, whose tasks have all completed, from pool
17:47:15.074 INFO DAGScheduler - ShuffleMapStage 556 (mapToPair at SparkUtils.java:161) finished in 0.010 s
17:47:15.074 INFO DAGScheduler - looking for newly runnable stages
17:47:15.074 INFO DAGScheduler - running: HashSet()
17:47:15.074 INFO DAGScheduler - waiting: HashSet(ResultStage 557)
17:47:15.074 INFO DAGScheduler - failed: HashSet()
17:47:15.074 INFO DAGScheduler - Submitting ResultStage 557 (MapPartitionsRDD[3381] at mapToPair at BamSink.java:91), which has no missing parents
17:47:15.080 INFO MemoryStore - Block broadcast_1307 stored as values in memory (estimated size 163.7 KiB, free 1900.0 MiB)
17:47:15.081 INFO MemoryStore - Block broadcast_1307_piece0 stored as bytes in memory (estimated size 73.9 KiB, free 1899.9 MiB)
17:47:15.081 INFO BlockManagerInfo - Added broadcast_1307_piece0 in memory on localhost:42685 (size: 73.9 KiB, free: 1918.1 MiB)
17:47:15.082 INFO SparkContext - Created broadcast 1307 from broadcast at DAGScheduler.scala:1580
17:47:15.082 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 557 (MapPartitionsRDD[3381] at mapToPair at BamSink.java:91) (first 15 tasks are for partitions Vector(0))
17:47:15.082 INFO TaskSchedulerImpl - Adding task set 557.0 with 1 tasks resource profile 0
17:47:15.083 INFO TaskSetManager - Starting task 0.0 in stage 557.0 (TID 573) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:15.083 INFO Executor - Running task 0.0 in stage 557.0 (TID 573)
17:47:15.088 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:15.088 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:15.090 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
17:47:15.091 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:15.091 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:15.091 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
17:47:15.091 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:15.091 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:15.098 WARN IntelDeflaterFactory - Intel Deflater not supported, using Java.util.zip.Deflater
17:47:15.106 INFO FileOutputCommitter - Saved output of task 'attempt_202510271747153852673586502774694_3381_r_000000_0' to file:/tmp/local16469784102307886954/markdups3810903335753196731.bam.parts/_temporary/0/task_202510271747153852673586502774694_3381_r_000000
17:47:15.106 INFO SparkHadoopMapRedUtil - attempt_202510271747153852673586502774694_3381_r_000000_0: Committed. Elapsed time: 0 ms.
17:47:15.107 INFO Executor - Finished task 0.0 in stage 557.0 (TID 573). 1858 bytes result sent to driver
17:47:15.107 INFO TaskSetManager - Finished task 0.0 in stage 557.0 (TID 573) in 25 ms on localhost (executor driver) (1/1)
17:47:15.107 INFO TaskSchedulerImpl - Removed TaskSet 557.0, whose tasks have all completed, from pool
17:47:15.107 INFO DAGScheduler - ResultStage 557 (runJob at SparkHadoopWriter.scala:83) finished in 0.033 s
17:47:15.108 INFO DAGScheduler - Job 386 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:15.108 INFO TaskSchedulerImpl - Killing all running tasks in stage 557: Stage finished
17:47:15.108 INFO DAGScheduler - Job 386 finished: runJob at SparkHadoopWriter.scala:83, took 0.045434 s
17:47:15.108 INFO SparkHadoopWriter - Start to commit write Job job_202510271747153852673586502774694_3381.
17:47:15.112 INFO SparkHadoopWriter - Write Job job_202510271747153852673586502774694_3381 committed. Elapsed time: 3 ms.
17:47:15.114 WARN IntelDeflaterFactory - Intel Deflater not supported, using Java.util.zip.Deflater
17:47:15.120 INFO HadoopFileSystemWrapper - Concatenating 3 parts to /tmp/local16469784102307886954/markdups3810903335753196731.bam
17:47:15.123 INFO HadoopFileSystemWrapper - Concatenating to /tmp/local16469784102307886954/markdups3810903335753196731.bam done
17:47:15.123 INFO IndexFileMerger - Merging .sbi files in temp directory /tmp/local16469784102307886954/markdups3810903335753196731.bam.parts/ to /tmp/local16469784102307886954/markdups3810903335753196731.bam.sbi
17:47:15.128 INFO IndexFileMerger - Done merging .sbi files
17:47:15.128 INFO IndexFileMerger - Merging .bai files in temp directory /tmp/local16469784102307886954/markdups3810903335753196731.bam.parts/ to /tmp/local16469784102307886954/markdups3810903335753196731.bam.bai
17:47:15.132 INFO IndexFileMerger - Done merging .bai files
17:47:15.132 INFO MarkDuplicatesSpark - Shutting down engine
[October 27, 2025 at 5:47:15 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
17:47:15.133 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.133 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.134 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.134 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.134 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.134 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.135 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.135 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:15.139 WARN IntelDeflaterFactory - Intel Deflater not supported, using Java.util.zip.Deflater
[October 27, 2025 at 5:47:15 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:15 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:16 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:16 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:16 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:17 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:18 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:19 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:20 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:20 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:20 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:21 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:21 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:22 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:24 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:24 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:24 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:25 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:25 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:26 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:27 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:28 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:28 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:29 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:29 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:29 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:30 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:31 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:32 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:33 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:33 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:33 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:34 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:34 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:35 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:36 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:37 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:38 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1610612736
[October 27, 2025 at 5:47:40 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1631584256
[October 27, 2025 at 5:47:40 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1631584256
17:47:40.529 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:40.532 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression10862195711222529235.so: /tmp/local16469784102307886954/libgkl_compression10862195711222529235.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:40.532 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:40.535 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression4956608467055720480.so: /tmp/local16469784102307886954/libgkl_compression4956608467055720480.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:40.536 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:40.536 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
17:47:40.536 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
17:47:40.536 INFO MarkDuplicatesSpark - Executing as runner@runnervmrcw8b on Linux v6.11.0-1018-azure aarch64
17:47:40.536 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
17:47:40.536 INFO MarkDuplicatesSpark - Start Date/Time: October 27, 2025 at 5:47:40 PM UTC
17:47:40.536 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:40.536 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:40.536 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
17:47:40.536 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
17:47:40.536 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
17:47:40.536 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
17:47:40.537 INFO MarkDuplicatesSpark - Deflater: JdkDeflater
17:47:40.537 INFO MarkDuplicatesSpark - Inflater: JdkInflater
17:47:40.537 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
17:47:40.537 INFO MarkDuplicatesSpark - Requester pays: disabled
17:47:40.537 INFO MarkDuplicatesSpark - Initializing engine
17:47:40.537 INFO MarkDuplicatesSpark - Done initializing engine
17:47:40.537 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
17:47:40.538 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:40.540 INFO MemoryStore - Block broadcast_1884 stored as values in memory (estimated size 306.3 KiB, free 1916.7 MiB)
17:47:40.546 INFO MemoryStore - Block broadcast_1884_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1916.7 MiB)
17:47:40.546 INFO BlockManagerInfo - Added broadcast_1884_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1919.2 MiB)
17:47:40.546 INFO SparkContext - Created broadcast 1884 from newAPIHadoopFile at PathSplitSource.java:96
17:47:40.564 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:40.566 INFO MemoryStore - Block broadcast_1885 stored as values in memory (estimated size 306.3 KiB, free 1916.4 MiB)
17:47:40.572 INFO MemoryStore - Block broadcast_1885_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1916.3 MiB)
17:47:40.572 INFO BlockManagerInfo - Added broadcast_1885_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1919.2 MiB)
17:47:40.573 INFO SparkContext - Created broadcast 1885 from newAPIHadoopFile at PathSplitSource.java:96
17:47:40.605 INFO SparkContext - Starting job: collect at SparkUtils.java:205
17:47:40.606 INFO FileInputFormat - Total input files to process : 1
17:47:40.606 INFO DAGScheduler - Registering RDD 5273 (mapToPair at SparkUtils.java:161) as input to shuffle 274
17:47:40.607 INFO DAGScheduler - Got job 517 (collect at SparkUtils.java:205) with 1 output partitions
17:47:40.607 INFO DAGScheduler - Final stage: ResultStage 1019 (collect at SparkUtils.java:205)
17:47:40.607 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1018)
17:47:40.607 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1018)
17:47:40.607 INFO DAGScheduler - Submitting ShuffleMapStage 1018 (MapPartitionsRDD[5273] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:40.623 INFO MemoryStore - Block broadcast_1886 stored as values in memory (estimated size 493.8 KiB, free 1915.8 MiB)
17:47:40.625 INFO MemoryStore - Block broadcast_1886_piece0 stored as bytes in memory (estimated size 210.1 KiB, free 1915.6 MiB)
17:47:40.625 INFO BlockManagerInfo - Added broadcast_1886_piece0 in memory on localhost:42685 (size: 210.1 KiB, free: 1918.9 MiB)
17:47:40.625 INFO SparkContext - Created broadcast 1886 from broadcast at DAGScheduler.scala:1580
17:47:40.625 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1018 (MapPartitionsRDD[5273] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:40.625 INFO TaskSchedulerImpl - Adding task set 1018.0 with 1 tasks resource profile 0
17:47:40.626 INFO TaskSetManager - Starting task 0.0 in stage 1018.0 (TID 944) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9758 bytes)
17:47:40.626 INFO Executor - Running task 0.0 in stage 1018.0 (TID 944)
17:47:40.661 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/example.chr1.1-1K.unmarkedDups.bam:0+13412
17:47:40.662 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:40.663 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:40.664 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:40.668 INFO Executor - Finished task 0.0 in stage 1018.0 (TID 944). 1148 bytes result sent to driver
17:47:40.668 INFO TaskSetManager - Finished task 0.0 in stage 1018.0 (TID 944) in 42 ms on localhost (executor driver) (1/1)
17:47:40.668 INFO DAGScheduler - ShuffleMapStage 1018 (mapToPair at SparkUtils.java:161) finished in 0.061 s
17:47:40.668 INFO DAGScheduler - looking for newly runnable stages
17:47:40.668 INFO DAGScheduler - running: HashSet()
17:47:40.668 INFO DAGScheduler - waiting: HashSet(ResultStage 1019)
17:47:40.668 INFO DAGScheduler - failed: HashSet()
17:47:40.669 INFO DAGScheduler - Submitting ResultStage 1019 (MapPartitionsRDD[5276] at mapPartitions at SparkUtils.java:188), which has no missing parents
17:47:40.669 INFO TaskSchedulerImpl - Removed TaskSet 1018.0, whose tasks have all completed, from pool
17:47:40.669 INFO MemoryStore - Block broadcast_1887 stored as values in memory (estimated size 7.4 KiB, free 1915.6 MiB)
17:47:40.669 INFO MemoryStore - Block broadcast_1887_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1915.6 MiB)
17:47:40.670 INFO BlockManagerInfo - Added broadcast_1887_piece0 in memory on localhost:42685 (size: 4.1 KiB, free: 1918.9 MiB)
17:47:40.670 INFO SparkContext - Created broadcast 1887 from broadcast at DAGScheduler.scala:1580
17:47:40.670 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1019 (MapPartitionsRDD[5276] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
17:47:40.670 INFO TaskSchedulerImpl - Adding task set 1019.0 with 1 tasks resource profile 0
17:47:40.671 INFO TaskSetManager - Starting task 0.0 in stage 1019.0 (TID 945) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:40.671 INFO Executor - Running task 0.0 in stage 1019.0 (TID 945)
17:47:40.672 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:40.672 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:40.675 INFO Executor - Finished task 0.0 in stage 1019.0 (TID 945). 2329 bytes result sent to driver
17:47:40.676 INFO TaskSetManager - Finished task 0.0 in stage 1019.0 (TID 945) in 6 ms on localhost (executor driver) (1/1)
17:47:40.676 INFO TaskSchedulerImpl - Removed TaskSet 1019.0, whose tasks have all completed, from pool
17:47:40.676 INFO DAGScheduler - ResultStage 1019 (collect at SparkUtils.java:205) finished in 0.007 s
17:47:40.676 INFO DAGScheduler - Job 517 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:40.676 INFO TaskSchedulerImpl - Killing all running tasks in stage 1019: Stage finished
17:47:40.676 INFO DAGScheduler - Job 517 finished: collect at SparkUtils.java:205, took 0.070536 s
17:47:40.681 INFO MemoryStore - Block broadcast_1888 stored as values in memory (estimated size 7.8 KiB, free 1915.6 MiB)
17:47:40.681 INFO MemoryStore - Block broadcast_1888_piece0 stored as bytes in memory (estimated size 540.0 B, free 1915.6 MiB)
17:47:40.681 INFO BlockManagerInfo - Added broadcast_1888_piece0 in memory on localhost:42685 (size: 540.0 B, free: 1918.9 MiB)
17:47:40.682 INFO SparkContext - Created broadcast 1888 from broadcast at MarkDuplicatesSparkUtils.java:126
17:47:40.682 INFO MemoryStore - Block broadcast_1889 stored as values in memory (estimated size 2.6 KiB, free 1915.6 MiB)
17:47:40.682 INFO MemoryStore - Block broadcast_1889_piece0 stored as bytes in memory (estimated size 209.0 B, free 1915.6 MiB)
17:47:40.682 INFO BlockManagerInfo - Added broadcast_1889_piece0 in memory on localhost:42685 (size: 209.0 B, free: 1918.9 MiB)
17:47:40.683 INFO SparkContext - Created broadcast 1889 from broadcast at MarkDuplicatesSparkUtils.java:127
17:47:40.698 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
17:47:40.698 INFO DAGScheduler - Registering RDD 5285 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 277
17:47:40.699 INFO DAGScheduler - Registering RDD 5289 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 276
17:47:40.699 INFO DAGScheduler - Registering RDD 5293 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 275
17:47:40.699 INFO DAGScheduler - Got job 518 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
17:47:40.699 INFO DAGScheduler - Final stage: ResultStage 1024 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
17:47:40.699 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1023)
17:47:40.699 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1023)
17:47:40.699 INFO DAGScheduler - Submitting ShuffleMapStage 1021 (MapPartitionsRDD[5285] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
17:47:40.700 INFO MemoryStore - Block broadcast_1890 stored as values in memory (estimated size 57.6 KiB, free 1915.6 MiB)
17:47:40.701 INFO MemoryStore - Block broadcast_1890_piece0 stored as bytes in memory (estimated size 19.8 KiB, free 1915.5 MiB)
17:47:40.701 INFO BlockManagerInfo - Added broadcast_1890_piece0 in memory on localhost:42685 (size: 19.8 KiB, free: 1918.9 MiB)
17:47:40.701 INFO SparkContext - Created broadcast 1890 from broadcast at DAGScheduler.scala:1580
17:47:40.701 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1021 (MapPartitionsRDD[5285] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
17:47:40.701 INFO TaskSchedulerImpl - Adding task set 1021.0 with 1 tasks resource profile 0
17:47:40.702 INFO TaskSetManager - Starting task 0.0 in stage 1021.0 (TID 946) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
17:47:40.702 INFO Executor - Running task 0.0 in stage 1021.0 (TID 946)
17:47:40.705 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:40.705 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:40.710 INFO Executor - Finished task 0.0 in stage 1021.0 (TID 946). 1922 bytes result sent to driver
17:47:40.710 INFO TaskSetManager - Finished task 0.0 in stage 1021.0 (TID 946) in 8 ms on localhost (executor driver) (1/1)
17:47:40.710 INFO TaskSchedulerImpl - Removed TaskSet 1021.0, whose tasks have all completed, from pool
17:47:40.710 INFO DAGScheduler - ShuffleMapStage 1021 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.011 s
17:47:40.710 INFO DAGScheduler - looking for newly runnable stages
17:47:40.710 INFO DAGScheduler - running: HashSet()
17:47:40.710 INFO DAGScheduler - waiting: HashSet(ResultStage 1024, ShuffleMapStage 1022, ShuffleMapStage 1023)
17:47:40.710 INFO DAGScheduler - failed: HashSet()
17:47:40.711 INFO DAGScheduler - Submitting ShuffleMapStage 1022 (MapPartitionsRDD[5289] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
17:47:40.712 INFO MemoryStore - Block broadcast_1891 stored as values in memory (estimated size 61.7 KiB, free 1915.5 MiB)
17:47:40.712 INFO MemoryStore - Block broadcast_1891_piece0 stored as bytes in memory (estimated size 22.0 KiB, free 1915.5 MiB)
17:47:40.712 INFO BlockManagerInfo - Added broadcast_1891_piece0 in memory on localhost:42685 (size: 22.0 KiB, free: 1918.9 MiB)
17:47:40.713 INFO SparkContext - Created broadcast 1891 from broadcast at DAGScheduler.scala:1580
17:47:40.713 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1022 (MapPartitionsRDD[5289] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
17:47:40.713 INFO TaskSchedulerImpl - Adding task set 1022.0 with 1 tasks resource profile 0
17:47:40.713 INFO TaskSetManager - Starting task 0.0 in stage 1022.0 (TID 947) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
17:47:40.714 INFO Executor - Running task 0.0 in stage 1022.0 (TID 947)
17:47:40.716 INFO ShuffleBlockFetcherIterator - Getting 1 (2.0 KiB) non-empty blocks including 1 (2.0 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.716 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.724 INFO Executor - Finished task 0.0 in stage 1022.0 (TID 947). 1922 bytes result sent to driver 17:47:40.724 INFO TaskSetManager - Finished task 0.0 in stage 1022.0 (TID 947) in 11 ms on localhost (executor driver) (1/1) 17:47:40.724 INFO TaskSchedulerImpl - Removed TaskSet 1022.0, whose tasks have all completed, from pool 17:47:40.725 INFO DAGScheduler - ShuffleMapStage 1022 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.014 s 17:47:40.725 INFO DAGScheduler - looking for newly runnable stages 17:47:40.725 INFO DAGScheduler - running: HashSet() 17:47:40.725 INFO DAGScheduler - waiting: HashSet(ResultStage 1024, ShuffleMapStage 1023) 17:47:40.725 INFO DAGScheduler - failed: HashSet() 17:47:40.725 INFO DAGScheduler - Submitting ShuffleMapStage 1023 (MapPartitionsRDD[5293] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents 17:47:40.726 INFO MemoryStore - Block broadcast_1892 stored as values in memory (estimated size 52.4 KiB, free 1915.4 MiB) 17:47:40.727 INFO MemoryStore - Block broadcast_1892_piece0 stored as bytes in memory (estimated size 17.3 KiB, free 1915.4 MiB) 17:47:40.727 INFO BlockManagerInfo - Added broadcast_1892_piece0 in memory on localhost:42685 (size: 17.3 KiB, free: 1918.9 MiB) 17:47:40.727 INFO SparkContext - Created broadcast 1892 from broadcast at DAGScheduler.scala:1580 17:47:40.727 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1023 (MapPartitionsRDD[5293] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0)) 17:47:40.727 INFO TaskSchedulerImpl - Adding task set 1023.0 with 1 tasks resource profile 0 17:47:40.728 INFO TaskSetManager - Starting task 0.0 in 
stage 1023.0 (TID 948) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes) 17:47:40.728 INFO Executor - Running task 0.0 in stage 1023.0 (TID 948) 17:47:40.730 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.730 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.733 INFO ShuffleBlockFetcherIterator - Getting 1 (2.7 KiB) non-empty blocks including 1 (2.7 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.733 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.735 INFO Executor - Finished task 0.0 in stage 1023.0 (TID 948). 1922 bytes result sent to driver 17:47:40.736 INFO TaskSetManager - Finished task 0.0 in stage 1023.0 (TID 948) in 8 ms on localhost (executor driver) (1/1) 17:47:40.736 INFO TaskSchedulerImpl - Removed TaskSet 1023.0, whose tasks have all completed, from pool 17:47:40.736 INFO DAGScheduler - ShuffleMapStage 1023 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.011 s 17:47:40.736 INFO DAGScheduler - looking for newly runnable stages 17:47:40.736 INFO DAGScheduler - running: HashSet() 17:47:40.736 INFO DAGScheduler - waiting: HashSet(ResultStage 1024) 17:47:40.736 INFO DAGScheduler - failed: HashSet() 17:47:40.736 INFO DAGScheduler - Submitting ResultStage 1024 (MapPartitionsRDD[5295] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents 17:47:40.737 INFO MemoryStore - Block broadcast_1893 stored as values in memory (estimated size 53.6 KiB, free 1915.3 MiB) 17:47:40.738 INFO MemoryStore - Block broadcast_1893_piece0 stored as bytes in memory (estimated size 18.0 KiB, free 1915.3 MiB) 17:47:40.738 INFO BlockManagerInfo - Added broadcast_1893_piece0 in memory on localhost:42685 (size: 18.0 KiB, free: 1918.9 MiB) 17:47:40.738 INFO SparkContext - 
Created broadcast 1893 from broadcast at DAGScheduler.scala:1580 17:47:40.738 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1024 (MapPartitionsRDD[5295] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0)) 17:47:40.738 INFO TaskSchedulerImpl - Adding task set 1024.0 with 1 tasks resource profile 0 17:47:40.739 INFO TaskSetManager - Starting task 0.0 in stage 1024.0 (TID 949) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 17:47:40.739 INFO Executor - Running task 0.0 in stage 1024.0 (TID 949) 17:47:40.741 INFO ShuffleBlockFetcherIterator - Getting 1 (251.0 B) non-empty blocks including 1 (251.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.741 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.743 INFO Executor - Finished task 0.0 in stage 1024.0 (TID 949). 2371 bytes result sent to driver 17:47:40.743 INFO TaskSetManager - Finished task 0.0 in stage 1024.0 (TID 949) in 4 ms on localhost (executor driver) (1/1) 17:47:40.744 INFO TaskSchedulerImpl - Removed TaskSet 1024.0, whose tasks have all completed, from pool 17:47:40.744 INFO DAGScheduler - ResultStage 1024 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.008 s 17:47:40.744 INFO DAGScheduler - Job 518 is finished. 
Cancelling potential speculative or zombie tasks for this job 17:47:40.744 INFO TaskSchedulerImpl - Killing all running tasks in stage 1024: Stage finished 17:47:40.744 INFO DAGScheduler - Job 518 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.046052 s 17:47:40.757 INFO MemoryStore - Block broadcast_1894 stored as values in memory (estimated size 179.4 KiB, free 1915.1 MiB) 17:47:40.757 INFO MemoryStore - Block broadcast_1894_piece0 stored as bytes in memory (estimated size 7.9 KiB, free 1915.1 MiB) 17:47:40.758 INFO BlockManagerInfo - Added broadcast_1894_piece0 in memory on localhost:42685 (size: 7.9 KiB, free: 1918.9 MiB) 17:47:40.758 INFO SparkContext - Created broadcast 1894 from broadcast at ReadsSparkSink.java:133 17:47:40.762 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 17:47:40.762 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 17:47:40.762 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 17:47:40.775 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83 17:47:40.776 INFO DAGScheduler - Registering RDD 5297 (mapToPair at SparkUtils.java:161) as input to shuffle 278 17:47:40.776 INFO DAGScheduler - Got job 519 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions 17:47:40.776 INFO DAGScheduler - Final stage: ResultStage 1029 (runJob at SparkHadoopWriter.scala:83) 17:47:40.776 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1028) 17:47:40.776 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1028) 17:47:40.776 INFO DAGScheduler - Submitting ShuffleMapStage 1028 (MapPartitionsRDD[5297] at mapToPair at SparkUtils.java:161), which has no missing parents 17:47:40.778 INFO MemoryStore - Block broadcast_1895 stored as values in memory (estimated size 51.5 KiB, free 1915.1 MiB) 17:47:40.778 INFO MemoryStore - Block 
broadcast_1895_piece0 stored as bytes in memory (estimated size 16.8 KiB, free 1915.1 MiB) 17:47:40.778 INFO BlockManagerInfo - Added broadcast_1895_piece0 in memory on localhost:42685 (size: 16.8 KiB, free: 1918.8 MiB) 17:47:40.778 INFO SparkContext - Created broadcast 1895 from broadcast at DAGScheduler.scala:1580 17:47:40.779 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1028 (MapPartitionsRDD[5297] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 17:47:40.779 INFO TaskSchedulerImpl - Adding task set 1028.0 with 1 tasks resource profile 0 17:47:40.779 INFO TaskSetManager - Starting task 0.0 in stage 1028.0 (TID 950) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes) 17:47:40.780 INFO Executor - Running task 0.0 in stage 1028.0 (TID 950) 17:47:40.782 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.782 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.785 INFO ShuffleBlockFetcherIterator - Getting 1 (2.7 KiB) non-empty blocks including 1 (2.7 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.785 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.787 INFO Executor - Finished task 0.0 in stage 1028.0 (TID 950). 
1922 bytes result sent to driver 17:47:40.787 INFO TaskSetManager - Finished task 0.0 in stage 1028.0 (TID 950) in 8 ms on localhost (executor driver) (1/1) 17:47:40.787 INFO TaskSchedulerImpl - Removed TaskSet 1028.0, whose tasks have all completed, from pool 17:47:40.788 INFO DAGScheduler - ShuffleMapStage 1028 (mapToPair at SparkUtils.java:161) finished in 0.010 s 17:47:40.788 INFO DAGScheduler - looking for newly runnable stages 17:47:40.788 INFO DAGScheduler - running: HashSet() 17:47:40.788 INFO DAGScheduler - waiting: HashSet(ResultStage 1029) 17:47:40.788 INFO DAGScheduler - failed: HashSet() 17:47:40.788 INFO DAGScheduler - Submitting ResultStage 1029 (MapPartitionsRDD[5303] at saveAsTextFile at SamSink.java:65), which has no missing parents 17:47:40.794 INFO MemoryStore - Block broadcast_1896 stored as values in memory (estimated size 196.7 KiB, free 1914.9 MiB) 17:47:40.795 INFO MemoryStore - Block broadcast_1896_piece0 stored as bytes in memory (estimated size 80.9 KiB, free 1914.8 MiB) 17:47:40.795 INFO BlockManagerInfo - Added broadcast_1896_piece0 in memory on localhost:42685 (size: 80.9 KiB, free: 1918.8 MiB) 17:47:40.795 INFO SparkContext - Created broadcast 1896 from broadcast at DAGScheduler.scala:1580 17:47:40.796 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1029 (MapPartitionsRDD[5303] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0)) 17:47:40.796 INFO TaskSchedulerImpl - Adding task set 1029.0 with 1 tasks resource profile 0 17:47:40.796 INFO TaskSetManager - Starting task 0.0 in stage 1029.0 (TID 951) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 17:47:40.796 INFO Executor - Running task 0.0 in stage 1029.0 (TID 951) 17:47:40.801 INFO ShuffleBlockFetcherIterator - Getting 1 (14.8 KiB) non-empty blocks including 1 (14.8 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.802 INFO ShuffleBlockFetcherIterator - Started 0 
remote fetches in 0 ms 17:47:40.804 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 17:47:40.805 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 17:47:40.805 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 17:47:40.814 INFO FileOutputCommitter - Saved output of task 'attempt_202510271747405085486623723509395_5303_m_000000_0' to file:/tmp/MarkDups11640460351425265540/MarkDups.sam.parts/_temporary/0/task_202510271747405085486623723509395_5303_m_000000 17:47:40.814 INFO SparkHadoopMapRedUtil - attempt_202510271747405085486623723509395_5303_m_000000_0: Committed. Elapsed time: 0 ms. 17:47:40.814 INFO Executor - Finished task 0.0 in stage 1029.0 (TID 951). 1858 bytes result sent to driver 17:47:40.815 INFO TaskSetManager - Finished task 0.0 in stage 1029.0 (TID 951) in 19 ms on localhost (executor driver) (1/1) 17:47:40.815 INFO TaskSchedulerImpl - Removed TaskSet 1029.0, whose tasks have all completed, from pool 17:47:40.815 INFO DAGScheduler - ResultStage 1029 (runJob at SparkHadoopWriter.scala:83) finished in 0.027 s 17:47:40.815 INFO DAGScheduler - Job 519 is finished. Cancelling potential speculative or zombie tasks for this job 17:47:40.815 INFO TaskSchedulerImpl - Killing all running tasks in stage 1029: Stage finished 17:47:40.815 INFO DAGScheduler - Job 519 finished: runJob at SparkHadoopWriter.scala:83, took 0.039839 s 17:47:40.815 INFO SparkHadoopWriter - Start to commit write Job job_202510271747405085486623723509395_5303. 17:47:40.820 INFO SparkHadoopWriter - Write Job job_202510271747405085486623723509395_5303 committed. Elapsed time: 4 ms. 
17:47:40.825 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups11640460351425265540/MarkDups.sam 17:47:40.828 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups11640460351425265540/MarkDups.sam done 17:47:40.829 INFO MarkDuplicatesSpark - Shutting down engine [October 27, 2025 at 5:47:40 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1631584256 17:47:40.834 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so 17:47:40.837 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression10067280118762926012.so: /tmp/local16469784102307886954/libgkl_compression10067280118762926012.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform)) 17:47:40.838 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so 17:47:40.840 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression8128575907999543180.so: /tmp/local16469784102307886954/libgkl_compression8128575907999543180.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform)) 17:47:40.842 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 17:47:40.842 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable 17:47:40.842 INFO 
MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/ 17:47:40.842 INFO MarkDuplicatesSpark - Executing as runner@runnervmrcw8b on Linux v6.11.0-1018-azure aarch64 17:47:40.842 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10 17:47:40.842 INFO MarkDuplicatesSpark - Start Date/Time: October 27, 2025 at 5:47:40 PM UTC 17:47:40.842 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 17:47:40.842 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 17:47:40.842 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2 17:47:40.842 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 17:47:40.842 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 17:47:40.842 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 17:47:40.842 INFO MarkDuplicatesSpark - Deflater: JdkDeflater 17:47:40.842 INFO MarkDuplicatesSpark - Inflater: JdkInflater 17:47:40.842 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20 17:47:40.842 INFO MarkDuplicatesSpark - Requester pays: disabled 17:47:40.842 INFO MarkDuplicatesSpark - Initializing engine 17:47:40.842 INFO MarkDuplicatesSpark - Done initializing engine 17:47:40.842 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument) 17:47:40.843 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater 17:47:40.845 INFO MemoryStore - Block broadcast_1897 stored as values in memory (estimated size 306.3 KiB, free 1914.5 MiB) 17:47:40.851 INFO MemoryStore - Block broadcast_1897_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1914.4 MiB) 17:47:40.851 INFO BlockManagerInfo - Added broadcast_1897_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.7 MiB) 17:47:40.851 INFO SparkContext - Created broadcast 1897 from 
newAPIHadoopFile at PathSplitSource.java:96 17:47:40.869 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater 17:47:40.871 INFO MemoryStore - Block broadcast_1898 stored as values in memory (estimated size 306.3 KiB, free 1914.1 MiB) 17:47:40.876 INFO MemoryStore - Block broadcast_1898_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1914.1 MiB) 17:47:40.876 INFO BlockManagerInfo - Added broadcast_1898_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.6 MiB) 17:47:40.877 INFO SparkContext - Created broadcast 1898 from newAPIHadoopFile at PathSplitSource.java:96 17:47:40.908 INFO SparkContext - Starting job: collect at SparkUtils.java:205 17:47:40.909 INFO FileInputFormat - Total input files to process : 1 17:47:40.910 INFO DAGScheduler - Registering RDD 5317 (mapToPair at SparkUtils.java:161) as input to shuffle 279 17:47:40.910 INFO DAGScheduler - Got job 520 (collect at SparkUtils.java:205) with 1 output partitions 17:47:40.910 INFO DAGScheduler - Final stage: ResultStage 1031 (collect at SparkUtils.java:205) 17:47:40.910 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1030) 17:47:40.910 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1030) 17:47:40.910 INFO DAGScheduler - Submitting ShuffleMapStage 1030 (MapPartitionsRDD[5317] at mapToPair at SparkUtils.java:161), which has no missing parents 17:47:40.925 INFO MemoryStore - Block broadcast_1899 stored as values in memory (estimated size 457.4 KiB, free 1913.6 MiB) 17:47:40.927 INFO MemoryStore - Block broadcast_1899_piece0 stored as bytes in memory (estimated size 201.1 KiB, free 1913.4 MiB) 17:47:40.927 INFO BlockManagerInfo - Added broadcast_1899_piece0 in memory on localhost:42685 (size: 201.1 KiB, free: 1918.4 MiB) 17:47:40.927 INFO SparkContext - Created broadcast 1899 from broadcast at DAGScheduler.scala:1580 17:47:40.928 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1030 
(MapPartitionsRDD[5317] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 17:47:40.928 INFO TaskSchedulerImpl - Adding task set 1030.0 with 1 tasks resource profile 0 17:47:40.928 INFO TaskSetManager - Starting task 0.0 in stage 1030.0 (TID 952) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9738 bytes) 17:47:40.928 INFO Executor - Running task 0.0 in stage 1030.0 (TID 952) 17:47:40.962 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/optical_dupes.bam:0+678 17:47:40.963 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater 17:47:40.964 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater 17:47:40.964 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater 17:47:40.967 INFO Executor - Finished task 0.0 in stage 1030.0 (TID 952). 1148 bytes result sent to driver 17:47:40.967 INFO TaskSetManager - Finished task 0.0 in stage 1030.0 (TID 952) in 39 ms on localhost (executor driver) (1/1) 17:47:40.967 INFO TaskSchedulerImpl - Removed TaskSet 1030.0, whose tasks have all completed, from pool 17:47:40.968 INFO DAGScheduler - ShuffleMapStage 1030 (mapToPair at SparkUtils.java:161) finished in 0.058 s 17:47:40.968 INFO DAGScheduler - looking for newly runnable stages 17:47:40.968 INFO DAGScheduler - running: HashSet() 17:47:40.968 INFO DAGScheduler - waiting: HashSet(ResultStage 1031) 17:47:40.968 INFO DAGScheduler - failed: HashSet() 17:47:40.968 INFO DAGScheduler - Submitting ResultStage 1031 (MapPartitionsRDD[5320] at mapPartitions at SparkUtils.java:188), which has no missing parents 17:47:40.968 INFO MemoryStore - Block broadcast_1900 stored as values in memory (estimated size 7.4 KiB, free 1913.4 MiB) 17:47:40.969 INFO MemoryStore - Block broadcast_1900_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1913.4 MiB) 
17:47:40.969 INFO BlockManagerInfo - Added broadcast_1900_piece0 in memory on localhost:42685 (size: 4.1 KiB, free: 1918.4 MiB) 17:47:40.969 INFO SparkContext - Created broadcast 1900 from broadcast at DAGScheduler.scala:1580 17:47:40.969 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1031 (MapPartitionsRDD[5320] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0)) 17:47:40.969 INFO TaskSchedulerImpl - Adding task set 1031.0 with 1 tasks resource profile 0 17:47:40.970 INFO TaskSetManager - Starting task 0.0 in stage 1031.0 (TID 953) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 17:47:40.970 INFO Executor - Running task 0.0 in stage 1031.0 (TID 953) 17:47:40.971 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.972 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:40.973 INFO Executor - Finished task 0.0 in stage 1031.0 (TID 953). 2269 bytes result sent to driver 17:47:40.974 INFO TaskSetManager - Finished task 0.0 in stage 1031.0 (TID 953) in 4 ms on localhost (executor driver) (1/1) 17:47:40.974 INFO TaskSchedulerImpl - Removed TaskSet 1031.0, whose tasks have all completed, from pool 17:47:40.974 INFO DAGScheduler - ResultStage 1031 (collect at SparkUtils.java:205) finished in 0.006 s 17:47:40.974 INFO DAGScheduler - Job 520 is finished. 
Cancelling potential speculative or zombie tasks for this job 17:47:40.974 INFO TaskSchedulerImpl - Killing all running tasks in stage 1031: Stage finished 17:47:40.974 INFO DAGScheduler - Job 520 finished: collect at SparkUtils.java:205, took 0.065411 s 17:47:40.979 INFO MemoryStore - Block broadcast_1901 stored as values in memory (estimated size 136.0 B, free 1913.4 MiB) 17:47:40.979 INFO MemoryStore - Block broadcast_1901_piece0 stored as bytes in memory (estimated size 24.0 B, free 1913.4 MiB) 17:47:40.979 INFO BlockManagerInfo - Added broadcast_1901_piece0 in memory on localhost:42685 (size: 24.0 B, free: 1918.4 MiB) 17:47:40.980 INFO SparkContext - Created broadcast 1901 from broadcast at MarkDuplicatesSparkUtils.java:126 17:47:40.980 INFO MemoryStore - Block broadcast_1902 stored as values in memory (estimated size 136.0 B, free 1913.4 MiB) 17:47:40.980 INFO MemoryStore - Block broadcast_1902_piece0 stored as bytes in memory (estimated size 21.0 B, free 1913.4 MiB) 17:47:40.980 INFO BlockManagerInfo - Added broadcast_1902_piece0 in memory on localhost:42685 (size: 21.0 B, free: 1918.4 MiB) 17:47:40.980 INFO SparkContext - Created broadcast 1902 from broadcast at MarkDuplicatesSparkUtils.java:127 17:47:40.993 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542 17:47:40.994 INFO DAGScheduler - Registering RDD 5329 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 282 17:47:40.994 INFO DAGScheduler - Registering RDD 5333 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 281 17:47:40.994 INFO DAGScheduler - Registering RDD 5337 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 280 17:47:40.994 INFO DAGScheduler - Got job 521 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions 17:47:40.994 INFO DAGScheduler - Final stage: ResultStage 1036 (collectAsMap at MarkDuplicatesSparkUtils.java:542) 17:47:40.994 INFO DAGScheduler - Parents of final stage: 
List(ShuffleMapStage 1035) 17:47:40.994 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1035) 17:47:40.994 INFO DAGScheduler - Submitting ShuffleMapStage 1033 (MapPartitionsRDD[5329] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents 17:47:40.995 INFO MemoryStore - Block broadcast_1903 stored as values in memory (estimated size 21.1 KiB, free 1913.4 MiB) 17:47:40.995 INFO MemoryStore - Block broadcast_1903_piece0 stored as bytes in memory (estimated size 10.9 KiB, free 1913.4 MiB) 17:47:40.996 INFO BlockManagerInfo - Added broadcast_1903_piece0 in memory on localhost:42685 (size: 10.9 KiB, free: 1918.4 MiB) 17:47:40.996 INFO SparkContext - Created broadcast 1903 from broadcast at DAGScheduler.scala:1580 17:47:40.996 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1033 (MapPartitionsRDD[5329] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0)) 17:47:40.996 INFO TaskSchedulerImpl - Adding task set 1033.0 with 1 tasks resource profile 0 17:47:40.996 INFO TaskSetManager - Starting task 0.0 in stage 1033.0 (TID 954) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes) 17:47:40.997 INFO Executor - Running task 0.0 in stage 1033.0 (TID 954) 17:47:40.999 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:40.999 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:41.002 INFO Executor - Finished task 0.0 in stage 1033.0 (TID 954). 
1922 bytes result sent to driver 17:47:41.002 INFO TaskSetManager - Finished task 0.0 in stage 1033.0 (TID 954) in 6 ms on localhost (executor driver) (1/1) 17:47:41.002 INFO TaskSchedulerImpl - Removed TaskSet 1033.0, whose tasks have all completed, from pool 17:47:41.002 INFO DAGScheduler - ShuffleMapStage 1033 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.008 s 17:47:41.002 INFO DAGScheduler - looking for newly runnable stages 17:47:41.002 INFO DAGScheduler - running: HashSet() 17:47:41.002 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1034, ShuffleMapStage 1035, ResultStage 1036) 17:47:41.003 INFO DAGScheduler - failed: HashSet() 17:47:41.003 INFO DAGScheduler - Submitting ShuffleMapStage 1034 (MapPartitionsRDD[5333] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents 17:47:41.003 INFO MemoryStore - Block broadcast_1904 stored as values in memory (estimated size 25.2 KiB, free 1913.4 MiB) 17:47:41.004 INFO MemoryStore - Block broadcast_1904_piece0 stored as bytes in memory (estimated size 13.1 KiB, free 1913.3 MiB) 17:47:41.004 INFO BlockManagerInfo - Added broadcast_1904_piece0 in memory on localhost:42685 (size: 13.1 KiB, free: 1918.4 MiB) 17:47:41.004 INFO SparkContext - Created broadcast 1904 from broadcast at DAGScheduler.scala:1580 17:47:41.004 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1034 (MapPartitionsRDD[5333] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0)) 17:47:41.004 INFO TaskSchedulerImpl - Adding task set 1034.0 with 1 tasks resource profile 0 17:47:41.005 INFO TaskSetManager - Starting task 0.0 in stage 1034.0 (TID 955) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes) 17:47:41.005 INFO Executor - Running task 0.0 in stage 1034.0 (TID 955) 17:47:41.007 INFO ShuffleBlockFetcherIterator - Getting 1 (106.0 B) non-empty blocks including 1 (106.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 
0 (0.0 B) remote blocks
17:47:41.007 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.013 INFO Executor - Finished task 0.0 in stage 1034.0 (TID 955). 1922 bytes result sent to driver
17:47:41.013 INFO TaskSetManager - Finished task 0.0 in stage 1034.0 (TID 955) in 8 ms on localhost (executor driver) (1/1)
17:47:41.013 INFO TaskSchedulerImpl - Removed TaskSet 1034.0, whose tasks have all completed, from pool
17:47:41.013 INFO DAGScheduler - ShuffleMapStage 1034 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.010 s
17:47:41.013 INFO DAGScheduler - looking for newly runnable stages
17:47:41.013 INFO DAGScheduler - running: HashSet()
17:47:41.013 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1035, ResultStage 1036)
17:47:41.013 INFO DAGScheduler - failed: HashSet()
17:47:41.014 INFO DAGScheduler - Submitting ShuffleMapStage 1035 (MapPartitionsRDD[5337] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
17:47:41.014 INFO MemoryStore - Block broadcast_1905 stored as values in memory (estimated size 15.9 KiB, free 1913.3 MiB)
17:47:41.015 INFO MemoryStore - Block broadcast_1905_piece0 stored as bytes in memory (estimated size 8.3 KiB, free 1913.3 MiB)
17:47:41.015 INFO BlockManagerInfo - Added broadcast_1905_piece0 in memory on localhost:42685 (size: 8.3 KiB, free: 1918.4 MiB)
17:47:41.015 INFO SparkContext - Created broadcast 1905 from broadcast at DAGScheduler.scala:1580
17:47:41.015 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1035 (MapPartitionsRDD[5337] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
17:47:41.015 INFO TaskSchedulerImpl - Adding task set 1035.0 with 1 tasks resource profile 0
17:47:41.016 INFO TaskSetManager - Starting task 0.0 in stage 1035.0 (TID 956) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
17:47:41.016 INFO Executor - Running task 0.0 in stage 1035.0 (TID 956)
17:47:41.018 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.018 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.020 INFO ShuffleBlockFetcherIterator - Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.020 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.021 INFO Executor - Finished task 0.0 in stage 1035.0 (TID 956). 1922 bytes result sent to driver
17:47:41.021 INFO TaskSetManager - Finished task 0.0 in stage 1035.0 (TID 956) in 5 ms on localhost (executor driver) (1/1)
17:47:41.021 INFO TaskSchedulerImpl - Removed TaskSet 1035.0, whose tasks have all completed, from pool
17:47:41.022 INFO DAGScheduler - ShuffleMapStage 1035 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.008 s
17:47:41.022 INFO DAGScheduler - looking for newly runnable stages
17:47:41.022 INFO DAGScheduler - running: HashSet()
17:47:41.022 INFO DAGScheduler - waiting: HashSet(ResultStage 1036)
17:47:41.022 INFO DAGScheduler - failed: HashSet()
17:47:41.022 INFO DAGScheduler - Submitting ResultStage 1036 (MapPartitionsRDD[5339] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
17:47:41.023 INFO MemoryStore - Block broadcast_1906 stored as values in memory (estimated size 17.1 KiB, free 1913.3 MiB)
17:47:41.023 INFO MemoryStore - Block broadcast_1906_piece0 stored as bytes in memory (estimated size 9.0 KiB, free 1913.3 MiB)
17:47:41.023 INFO BlockManagerInfo - Added broadcast_1906_piece0 in memory on localhost:42685 (size: 9.0 KiB, free: 1918.4 MiB)
17:47:41.023 INFO SparkContext - Created broadcast 1906 from broadcast at DAGScheduler.scala:1580
17:47:41.023 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1036 (MapPartitionsRDD[5339] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
17:47:41.024 INFO TaskSchedulerImpl - Adding task set 1036.0 with 1 tasks resource profile 0
17:47:41.024 INFO TaskSetManager - Starting task 0.0 in stage 1036.0 (TID 957) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:41.024 INFO Executor - Running task 0.0 in stage 1036.0 (TID 957)
17:47:41.026 INFO ShuffleBlockFetcherIterator - Getting 1 (129.0 B) non-empty blocks including 1 (129.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.026 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.028 INFO Executor - Finished task 0.0 in stage 1036.0 (TID 957). 1930 bytes result sent to driver
17:47:41.028 INFO TaskSetManager - Finished task 0.0 in stage 1036.0 (TID 957) in 4 ms on localhost (executor driver) (1/1)
17:47:41.028 INFO TaskSchedulerImpl - Removed TaskSet 1036.0, whose tasks have all completed, from pool
17:47:41.028 INFO DAGScheduler - ResultStage 1036 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.006 s
17:47:41.028 INFO DAGScheduler - Job 521 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:41.028 INFO TaskSchedulerImpl - Killing all running tasks in stage 1036: Stage finished
17:47:41.029 INFO DAGScheduler - Job 521 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.035358 s
17:47:41.039 INFO MemoryStore - Block broadcast_1907 stored as values in memory (estimated size 5.6 KiB, free 1913.3 MiB)
17:47:41.040 INFO MemoryStore - Block broadcast_1907_piece0 stored as bytes in memory (estimated size 500.0 B, free 1913.3 MiB)
17:47:41.040 INFO BlockManagerInfo - Added broadcast_1907_piece0 in memory on localhost:42685 (size: 500.0 B, free: 1918.4 MiB)
17:47:41.040 INFO SparkContext - Created broadcast 1907 from broadcast at ReadsSparkSink.java:133
17:47:41.044 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
17:47:41.044 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:41.044 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:41.057 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
17:47:41.058 INFO DAGScheduler - Registering RDD 5341 (mapToPair at SparkUtils.java:161) as input to shuffle 283
17:47:41.058 INFO DAGScheduler - Got job 522 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
17:47:41.058 INFO DAGScheduler - Final stage: ResultStage 1041 (runJob at SparkHadoopWriter.scala:83)
17:47:41.058 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1040)
17:47:41.058 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1040)
17:47:41.058 INFO DAGScheduler - Submitting ShuffleMapStage 1040 (MapPartitionsRDD[5341] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:41.059 INFO MemoryStore - Block broadcast_1908 stored as values in memory (estimated size 15.0 KiB, free 1913.3 MiB)
17:47:41.059 INFO MemoryStore - Block broadcast_1908_piece0 stored as bytes in memory (estimated size 7.9 KiB, free 1913.3 MiB)
17:47:41.060 INFO BlockManagerInfo - Added broadcast_1908_piece0 in memory on localhost:42685 (size: 7.9 KiB, free: 1918.4 MiB)
17:47:41.060 INFO SparkContext - Created broadcast 1908 from broadcast at DAGScheduler.scala:1580
17:47:41.060 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1040 (MapPartitionsRDD[5341] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:41.060 INFO TaskSchedulerImpl - Adding task set 1040.0 with 1 tasks resource profile 0
17:47:41.061 INFO TaskSetManager - Starting task 0.0 in stage 1040.0 (TID 958) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
17:47:41.061 INFO Executor - Running task 0.0 in stage 1040.0 (TID 958)
17:47:41.063 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.063 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.065 INFO ShuffleBlockFetcherIterator - Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.065 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.066 INFO Executor - Finished task 0.0 in stage 1040.0 (TID 958). 1922 bytes result sent to driver
17:47:41.066 INFO TaskSetManager - Finished task 0.0 in stage 1040.0 (TID 958) in 5 ms on localhost (executor driver) (1/1)
17:47:41.066 INFO TaskSchedulerImpl - Removed TaskSet 1040.0, whose tasks have all completed, from pool
17:47:41.067 INFO DAGScheduler - ShuffleMapStage 1040 (mapToPair at SparkUtils.java:161) finished in 0.007 s
17:47:41.067 INFO DAGScheduler - looking for newly runnable stages
17:47:41.067 INFO DAGScheduler - running: HashSet()
17:47:41.067 INFO DAGScheduler - waiting: HashSet(ResultStage 1041)
17:47:41.067 INFO DAGScheduler - failed: HashSet()
17:47:41.067 INFO DAGScheduler - Submitting ResultStage 1041 (MapPartitionsRDD[5347] at saveAsTextFile at SamSink.java:65), which has no missing parents
17:47:41.072 INFO MemoryStore - Block broadcast_1909 stored as values in memory (estimated size 160.3 KiB, free 1913.1 MiB)
17:47:41.073 INFO MemoryStore - Block broadcast_1909_piece0 stored as bytes in memory (estimated size 72.1 KiB, free 1913.0 MiB)
17:47:41.073 INFO BlockManagerInfo - Added broadcast_1909_piece0 in memory on localhost:42685 (size: 72.1 KiB, free: 1918.3 MiB)
17:47:41.074 INFO SparkContext - Created broadcast 1909 from broadcast at DAGScheduler.scala:1580
17:47:41.074 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1041 (MapPartitionsRDD[5347] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0))
17:47:41.074 INFO TaskSchedulerImpl - Adding task set 1041.0 with 1 tasks resource profile 0
17:47:41.074 INFO TaskSetManager - Starting task 0.0 in stage 1041.0 (TID 959) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:41.075 INFO Executor - Running task 0.0 in stage 1041.0 (TID 959)
17:47:41.079 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.080 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.082 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
17:47:41.082 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
17:47:41.082 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17:47:41.091 INFO FileOutputCommitter - Saved output of task 'attempt_202510271747415597702476048815676_5347_m_000000_0' to file:/tmp/MarkDups18074377261370946189/MarkDups.sam.parts/_temporary/0/task_202510271747415597702476048815676_5347_m_000000
17:47:41.091 INFO SparkHadoopMapRedUtil - attempt_202510271747415597702476048815676_5347_m_000000_0: Committed. Elapsed time: 0 ms.
17:47:41.091 INFO Executor - Finished task 0.0 in stage 1041.0 (TID 959). 1858 bytes result sent to driver
17:47:41.091 INFO TaskSetManager - Finished task 0.0 in stage 1041.0 (TID 959) in 17 ms on localhost (executor driver) (1/1)
17:47:41.092 INFO TaskSchedulerImpl - Removed TaskSet 1041.0, whose tasks have all completed, from pool
17:47:41.092 INFO DAGScheduler - ResultStage 1041 (runJob at SparkHadoopWriter.scala:83) finished in 0.025 s
17:47:41.092 INFO DAGScheduler - Job 522 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:41.092 INFO TaskSchedulerImpl - Killing all running tasks in stage 1041: Stage finished
17:47:41.092 INFO DAGScheduler - Job 522 finished: runJob at SparkHadoopWriter.scala:83, took 0.034483 s
17:47:41.092 INFO SparkHadoopWriter - Start to commit write Job job_202510271747415597702476048815676_5347.
17:47:41.095 INFO SparkHadoopWriter - Write Job job_202510271747415597702476048815676_5347 committed. Elapsed time: 3 ms.
17:47:41.101 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups18074377261370946189/MarkDups.sam
17:47:41.104 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups18074377261370946189/MarkDups.sam done
17:47:41.104 INFO MarkDuplicatesSpark - Shutting down engine
[October 27, 2025 at 5:47:41 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes.
Runtime.totalMemory()=1631584256
17:47:41.109 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:41.112 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression6828757196602049320.so: /tmp/local16469784102307886954/libgkl_compression6828757196602049320.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:41.112 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/com.intel.gkl/gkl/0.9.1/e23340ee393a396807581d7af5b835c7fcf10ab3/gkl-0.9.1.jar!/com/intel/gkl/native/libgkl_compression.so
17:47:41.115 WARN NativeLibraryLoader - Unable to load libgkl_compression.so from native/libgkl_compression.so (/tmp/local16469784102307886954/libgkl_compression182282808386838094.so: /tmp/local16469784102307886954/libgkl_compression182282808386838094.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64 .so on a AARCH64 platform))
17:47:41.116 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:41.116 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
17:47:41.116 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
17:47:41.116 INFO MarkDuplicatesSpark - Executing as runner@runnervmrcw8b on Linux v6.11.0-1018-azure aarch64
17:47:41.116 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
17:47:41.116 INFO MarkDuplicatesSpark - Start Date/Time: October 27, 2025 at 5:47:41 PM UTC
17:47:41.116 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:41.116 INFO MarkDuplicatesSpark - ------------------------------------------------------------
17:47:41.117 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
17:47:41.117 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
17:47:41.117 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
17:47:41.117 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
17:47:41.117 INFO MarkDuplicatesSpark - Deflater: JdkDeflater
17:47:41.117 INFO MarkDuplicatesSpark - Inflater: JdkInflater
17:47:41.117 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
17:47:41.117 INFO MarkDuplicatesSpark - Requester pays: disabled
17:47:41.117 INFO MarkDuplicatesSpark - Initializing engine
17:47:41.117 INFO MarkDuplicatesSpark - Done initializing engine
17:47:41.117 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
17:47:41.118 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:41.120 INFO MemoryStore - Block broadcast_1910 stored as values in memory (estimated size 306.3 KiB, free 1912.7 MiB)
17:47:41.125 INFO MemoryStore - Block broadcast_1910_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1912.7 MiB)
17:47:41.126 INFO BlockManagerInfo - Added broadcast_1910_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.3 MiB)
17:47:41.126 INFO SparkContext - Created broadcast 1910 from newAPIHadoopFile at PathSplitSource.java:96
17:47:41.144 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:41.145 INFO MemoryStore - Block broadcast_1911 stored as values in memory (estimated size 306.3 KiB, free 1912.4 MiB)
17:47:41.151 INFO MemoryStore - Block broadcast_1911_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1912.3 MiB)
17:47:41.151 INFO BlockManagerInfo - Added broadcast_1911_piece0 in memory on localhost:42685 (size: 64.4 KiB, free: 1918.2 MiB)
17:47:41.151 INFO SparkContext - Created broadcast 1911 from newAPIHadoopFile at PathSplitSource.java:96
17:47:41.184 INFO SparkContext - Starting job: collect at SparkUtils.java:205
17:47:41.184 INFO FileInputFormat - Total input files to process : 1
17:47:41.185 INFO DAGScheduler - Registering RDD 5361 (mapToPair at SparkUtils.java:161) as input to shuffle 284
17:47:41.185 INFO DAGScheduler - Got job 523 (collect at SparkUtils.java:205) with 1 output partitions
17:47:41.185 INFO DAGScheduler - Final stage: ResultStage 1043 (collect at SparkUtils.java:205)
17:47:41.185 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1042)
17:47:41.185 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1042)
17:47:41.185 INFO DAGScheduler - Submitting ShuffleMapStage 1042 (MapPartitionsRDD[5361] at mapToPair at SparkUtils.java:161), which has no missing parents
17:47:41.200 INFO MemoryStore - Block broadcast_1912 stored as values in memory (estimated size 481.4 KiB, free 1911.8 MiB)
17:47:41.205 INFO BlockManagerInfo - Removed broadcast_1893_piece0 on localhost:42685 in memory (size: 18.0 KiB, free: 1918.2 MiB)
17:47:41.206 INFO BlockManagerInfo - Removed broadcast_1877_piece0 on localhost:42685 in memory (size: 11.2 KiB, free: 1918.2 MiB)
17:47:41.206 INFO BlockManagerInfo - Removed broadcast_1906_piece0 on localhost:42685 in memory (size: 9.0 KiB, free: 1918.2 MiB)
17:47:41.206 INFO MemoryStore - Block broadcast_1912_piece0 stored as bytes in memory (estimated size 207.4 KiB, free 1911.8 MiB)
17:47:41.206 INFO BlockManagerInfo - Removed broadcast_1876_piece0 on localhost:42685 in memory (size: 31.0 B, free: 1918.2 MiB)
17:47:41.207 INFO BlockManagerInfo - Added broadcast_1912_piece0 in memory on localhost:42685 (size: 207.4 KiB, free: 1918.0 MiB)
17:47:41.207 INFO SparkContext - Created broadcast 1912 from broadcast at DAGScheduler.scala:1580
17:47:41.207 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1042 (MapPartitionsRDD[5361] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
17:47:41.207 INFO TaskSchedulerImpl - Adding task set 1042.0 with 1 tasks resource profile 0
17:47:41.207 INFO BlockManagerInfo - Removed broadcast_1890_piece0 on localhost:42685 in memory (size: 19.8 KiB, free: 1918.0 MiB)
17:47:41.207 INFO BlockManagerInfo - Removed broadcast_1883_piece0 on localhost:42685 in memory (size: 72.1 KiB, free: 1918.1 MiB)
17:47:41.208 INFO TaskSetManager - Starting task 0.0 in stage 1042.0 (TID 960) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9757 bytes)
17:47:41.208 INFO Executor - Running task 0.0 in stage 1042.0 (TID 960)
17:47:41.209 INFO BlockManagerInfo - Removed broadcast_1891_piece0 on localhost:42685 in memory (size: 22.0 KiB, free: 1918.1 MiB)
17:47:41.209 INFO BlockManagerInfo - Removed broadcast_1909_piece0 on localhost:42685 in memory (size: 72.1 KiB, free: 1918.2 MiB)
17:47:41.210 INFO BlockManagerInfo - Removed broadcast_1880_piece0 on localhost:42685 in memory (size: 9.1 KiB, free: 1918.2 MiB)
17:47:41.211 INFO BlockManagerInfo - Removed broadcast_1888_piece0 on localhost:42685 in memory (size: 540.0 B, free: 1918.2 MiB)
17:47:41.211 INFO BlockManagerInfo - Removed broadcast_1873_piece0 on localhost:42685 in memory (size: 201.2 KiB, free: 1918.4 MiB)
17:47:41.212 INFO BlockManagerInfo - Removed broadcast_1875_piece0 on localhost:42685 in memory (size: 19.0 B, free: 1918.4 MiB)
17:47:41.212 INFO BlockManagerInfo - Removed broadcast_1874_piece0 on localhost:42685 in memory (size: 4.1 KiB, free: 1918.4 MiB)
17:47:41.212 INFO BlockManagerInfo - Removed broadcast_1910_piece0 on localhost:42685 in memory (size: 64.4 KiB, free: 1918.5 MiB)
17:47:41.213 INFO BlockManagerInfo - Removed broadcast_1881_piece0 on localhost:42685 in memory (size: 663.0 B, free: 1918.5 MiB)
17:47:41.213 INFO BlockManagerInfo - Removed broadcast_1903_piece0 on localhost:42685 in memory (size: 10.9 KiB, free: 1918.5 MiB)
17:47:41.214 INFO BlockManagerInfo - Removed broadcast_1896_piece0 on localhost:42685 in memory (size: 80.9 KiB, free: 1918.6 MiB)
17:47:41.214 INFO BlockManagerInfo - Removed broadcast_1905_piece0 on localhost:42685 in memory (size: 8.3 KiB, free: 1918.6 MiB)
17:47:41.214 INFO BlockManagerInfo - Removed broadcast_1884_piece0 on localhost:42685 in memory (size: 64.4 KiB, free: 1918.6 MiB)
17:47:41.215 INFO BlockManagerInfo - Removed broadcast_1872_piece0 on localhost:42685 in memory (size: 64.3 KiB, free: 1918.7 MiB)
17:47:41.215 INFO BlockManagerInfo - Removed broadcast_1900_piece0 on localhost:42685 in memory (size: 4.1 KiB, free: 1918.7 MiB)
17:47:41.215 INFO BlockManagerInfo - Removed broadcast_1897_piece0 on localhost:42685 in memory (size: 64.4 KiB, free: 1918.8 MiB)
17:47:41.216 INFO BlockManagerInfo - Removed broadcast_1904_piece0 on localhost:42685 in memory (size: 13.1 KiB, free: 1918.8 MiB)
17:47:41.216 INFO BlockManagerInfo - Removed broadcast_1907_piece0 on localhost:42685 in memory (size: 500.0 B, free: 1918.8 MiB)
17:47:41.217 INFO BlockManagerInfo - Removed broadcast_1889_piece0 on localhost:42685 in memory (size: 209.0 B, free: 1918.8 MiB)
17:47:41.217 INFO BlockManagerInfo - Removed broadcast_1908_piece0 on localhost:42685 in memory (size: 7.9 KiB, free: 1918.8 MiB)
17:47:41.219 INFO BlockManagerInfo - Removed broadcast_1887_piece0 on localhost:42685 in memory (size: 4.1 KiB, free: 1918.8 MiB)
17:47:41.219 INFO BlockManagerInfo - Removed broadcast_1886_piece0 on localhost:42685 in memory (size: 210.1 KiB, free: 1919.0 MiB)
17:47:41.220 INFO BlockManagerInfo - Removed broadcast_1894_piece0 on localhost:42685 in memory (size: 7.9 KiB, free: 1919.0 MiB)
17:47:41.220 INFO BlockManagerInfo - Removed broadcast_1871_piece0 on localhost:42685 in memory (size: 64.3 KiB, free: 1919.1 MiB)
17:47:41.220 INFO BlockManagerInfo - Removed broadcast_1899_piece0 on localhost:42685 in memory (size: 201.1 KiB, free: 1919.3 MiB)
17:47:41.221 INFO BlockManagerInfo - Removed broadcast_1895_piece0 on localhost:42685 in memory (size: 16.8 KiB, free: 1919.3 MiB)
17:47:41.221 INFO BlockManagerInfo - Removed broadcast_1885_piece0 on localhost:42685 in memory (size: 64.4 KiB, free: 1919.4 MiB)
17:47:41.222 INFO BlockManagerInfo - Removed broadcast_1882_piece0 on localhost:42685 in memory (size: 8.0 KiB, free: 1919.4 MiB)
17:47:41.222 INFO BlockManagerInfo - Removed broadcast_1898_piece0 on localhost:42685 in memory (size: 64.4 KiB, free: 1919.4 MiB)
17:47:41.222 INFO BlockManagerInfo - Removed broadcast_1878_piece0 on localhost:42685 in memory (size: 13.2 KiB, free: 1919.4 MiB)
17:47:41.223 INFO BlockManagerInfo - Removed broadcast_1879_piece0 on localhost:42685 in memory (size: 8.5 KiB, free: 1919.4 MiB)
17:47:41.224 INFO BlockManagerInfo - Removed broadcast_1901_piece0 on localhost:42685 in memory (size: 24.0 B, free: 1919.4 MiB)
17:47:41.225 INFO BlockManagerInfo - Removed broadcast_1892_piece0 on localhost:42685 in memory (size: 17.3 KiB, free: 1919.5 MiB)
17:47:41.225 INFO BlockManagerInfo - Removed broadcast_1902_piece0 on localhost:42685 in memory (size: 21.0 B, free: 1919.5 MiB)
17:47:41.243 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/inputSingleLibrarySolexa16404.bam:0+5416
17:47:41.243 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:41.244 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:41.246 WARN IntelInflaterFactory - IntelInflater is not supported, using Java.util.zip.Inflater
17:47:41.249 INFO Executor - Finished task 0.0 in stage 1042.0 (TID 960). 1148 bytes result sent to driver
17:47:41.250 INFO TaskSetManager - Finished task 0.0 in stage 1042.0 (TID 960) in 42 ms on localhost (executor driver) (1/1)
17:47:41.250 INFO TaskSchedulerImpl - Removed TaskSet 1042.0, whose tasks have all completed, from pool
17:47:41.250 INFO DAGScheduler - ShuffleMapStage 1042 (mapToPair at SparkUtils.java:161) finished in 0.065 s
17:47:41.250 INFO DAGScheduler - looking for newly runnable stages
17:47:41.250 INFO DAGScheduler - running: HashSet()
17:47:41.250 INFO DAGScheduler - waiting: HashSet(ResultStage 1043)
17:47:41.250 INFO DAGScheduler - failed: HashSet()
17:47:41.250 INFO DAGScheduler - Submitting ResultStage 1043 (MapPartitionsRDD[5364] at mapPartitions at SparkUtils.java:188), which has no missing parents
17:47:41.251 INFO MemoryStore - Block broadcast_1913 stored as values in memory (estimated size 7.4 KiB, free 1917.8 MiB)
17:47:41.251 INFO MemoryStore - Block broadcast_1913_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1917.8 MiB)
17:47:41.251 INFO BlockManagerInfo - Added broadcast_1913_piece0 in memory on localhost:42685 (size: 4.1 KiB, free: 1919.5 MiB)
17:47:41.251 INFO SparkContext - Created broadcast 1913 from broadcast at DAGScheduler.scala:1580
17:47:41.251 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1043 (MapPartitionsRDD[5364] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
17:47:41.251 INFO TaskSchedulerImpl - Adding task set 1043.0 with 1 tasks resource profile 0
17:47:41.252 INFO TaskSetManager - Starting task 0.0 in stage 1043.0 (TID 961) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:41.252 INFO Executor - Running task 0.0 in stage 1043.0 (TID 961)
17:47:41.253 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.254 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.256 INFO Executor - Finished task 0.0 in stage 1043.0 (TID 961). 2039 bytes result sent to driver
17:47:41.256 INFO TaskSetManager - Finished task 0.0 in stage 1043.0 (TID 961) in 4 ms on localhost (executor driver) (1/1)
17:47:41.256 INFO TaskSchedulerImpl - Removed TaskSet 1043.0, whose tasks have all completed, from pool
17:47:41.256 INFO DAGScheduler - ResultStage 1043 (collect at SparkUtils.java:205) finished in 0.006 s
17:47:41.256 INFO DAGScheduler - Job 523 is finished. Cancelling potential speculative or zombie tasks for this job
17:47:41.256 INFO TaskSchedulerImpl - Killing all running tasks in stage 1043: Stage finished
17:47:41.256 INFO DAGScheduler - Job 523 finished: collect at SparkUtils.java:205, took 0.072568 s
17:47:41.261 INFO MemoryStore - Block broadcast_1914 stored as values in memory (estimated size 392.0 B, free 1917.8 MiB)
17:47:41.261 INFO MemoryStore - Block broadcast_1914_piece0 stored as bytes in memory (estimated size 33.0 B, free 1917.8 MiB)
17:47:41.262 INFO BlockManagerInfo - Added broadcast_1914_piece0 in memory on localhost:42685 (size: 33.0 B, free: 1919.5 MiB)
17:47:41.262 INFO SparkContext - Created broadcast 1914 from broadcast at MarkDuplicatesSparkUtils.java:126
17:47:41.262 INFO MemoryStore - Block broadcast_1915 stored as values in memory (estimated size 144.0 B, free 1917.8 MiB)
17:47:41.262 INFO MemoryStore - Block broadcast_1915_piece0 stored as bytes in memory (estimated size 28.0 B, free 1917.8 MiB)
17:47:41.262 INFO BlockManagerInfo - Added broadcast_1915_piece0 in memory on localhost:42685 (size: 28.0 B, free: 1919.5 MiB)
17:47:41.263 INFO SparkContext - Created broadcast 1915 from broadcast at MarkDuplicatesSparkUtils.java:127
17:47:41.276 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
17:47:41.277 INFO DAGScheduler - Registering RDD 5373 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 287
17:47:41.277 INFO DAGScheduler - Registering RDD 5377 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 286
17:47:41.277 INFO DAGScheduler - Registering RDD 5381 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 285
17:47:41.277 INFO DAGScheduler - Got job 524 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
17:47:41.277 INFO DAGScheduler - Final stage: ResultStage 1048 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
17:47:41.277 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1047)
17:47:41.277 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1047)
17:47:41.277 INFO DAGScheduler - Submitting ShuffleMapStage 1045 (MapPartitionsRDD[5373] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
17:47:41.278 INFO MemoryStore - Block broadcast_1916 stored as values in memory (estimated size 45.2 KiB, free 1917.7 MiB)
17:47:41.279 INFO MemoryStore - Block broadcast_1916_piece0 stored as bytes in memory (estimated size 17.1 KiB, free 1917.7 MiB)
17:47:41.279 INFO BlockManagerInfo - Added broadcast_1916_piece0 in memory on localhost:42685 (size: 17.1 KiB, free: 1919.4 MiB)
17:47:41.279 INFO SparkContext - Created broadcast 1916 from broadcast at DAGScheduler.scala:1580
17:47:41.279 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1045 (MapPartitionsRDD[5373] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
17:47:41.279 INFO TaskSchedulerImpl - Adding task set 1045.0 with 1 tasks resource profile 0
17:47:41.280 INFO TaskSetManager - Starting task 0.0 in stage 1045.0 (TID 962) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
17:47:41.280 INFO Executor - Running task 0.0 in stage 1045.0 (TID 962)
17:47:41.282 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.282 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.286 INFO Executor - Finished task 0.0 in stage 1045.0 (TID 962). 1922 bytes result sent to driver
17:47:41.286 INFO TaskSetManager - Finished task 0.0 in stage 1045.0 (TID 962) in 6 ms on localhost (executor driver) (1/1)
17:47:41.286 INFO TaskSchedulerImpl - Removed TaskSet 1045.0, whose tasks have all completed, from pool
17:47:41.286 INFO DAGScheduler - ShuffleMapStage 1045 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.009 s
17:47:41.286 INFO DAGScheduler - looking for newly runnable stages
17:47:41.286 INFO DAGScheduler - running: HashSet()
17:47:41.286 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1046, ShuffleMapStage 1047, ResultStage 1048)
17:47:41.286 INFO DAGScheduler - failed: HashSet()
17:47:41.286 INFO DAGScheduler - Submitting ShuffleMapStage 1046 (MapPartitionsRDD[5377] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
17:47:41.287 INFO MemoryStore - Block broadcast_1917 stored as values in memory (estimated size 49.3 KiB, free 1917.6 MiB)
17:47:41.288 INFO MemoryStore - Block broadcast_1917_piece0 stored as bytes in memory (estimated size 19.3 KiB, free 1917.6 MiB)
17:47:41.288 INFO BlockManagerInfo - Added broadcast_1917_piece0 in memory on localhost:42685 (size: 19.3 KiB, free: 1919.4 MiB)
17:47:41.288 INFO SparkContext - Created broadcast 1917 from broadcast at DAGScheduler.scala:1580
17:47:41.288 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1046 (MapPartitionsRDD[5377] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
17:47:41.288 INFO TaskSchedulerImpl - Adding task set 1046.0 with 1 tasks resource profile 0
17:47:41.289 INFO TaskSetManager - Starting task 0.0 in stage 1046.0 (TID 963) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
17:47:41.289 INFO Executor - Running task 0.0 in stage 1046.0 (TID 963)
17:47:41.291 INFO ShuffleBlockFetcherIterator - Getting 1 (593.0 B) non-empty blocks including 1 (593.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.291 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.298 INFO Executor - Finished task 0.0 in stage 1046.0 (TID 963). 1922 bytes result sent to driver
17:47:41.298 INFO TaskSetManager - Finished task 0.0 in stage 1046.0 (TID 963) in 9 ms on localhost (executor driver) (1/1)
17:47:41.299 INFO TaskSchedulerImpl - Removed TaskSet 1046.0, whose tasks have all completed, from pool
17:47:41.299 INFO DAGScheduler - ShuffleMapStage 1046 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.012 s
17:47:41.299 INFO DAGScheduler - looking for newly runnable stages
17:47:41.299 INFO DAGScheduler - running: HashSet()
17:47:41.299 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1047, ResultStage 1048)
17:47:41.299 INFO DAGScheduler - failed: HashSet()
17:47:41.299 INFO DAGScheduler - Submitting ShuffleMapStage 1047 (MapPartitionsRDD[5381] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
17:47:41.300 INFO MemoryStore - Block broadcast_1918 stored as values in memory (estimated size 40.0 KiB, free 1917.6 MiB)
17:47:41.300 INFO MemoryStore - Block broadcast_1918_piece0 stored as bytes in memory (estimated size 14.5 KiB, free 1917.6 MiB)
17:47:41.300 INFO BlockManagerInfo - Added broadcast_1918_piece0 in memory on localhost:42685 (size: 14.5 KiB, free: 1919.4 MiB)
17:47:41.301 INFO SparkContext - Created broadcast 1918 from broadcast at DAGScheduler.scala:1580
17:47:41.301 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1047 (MapPartitionsRDD[5381] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
17:47:41.301 INFO TaskSchedulerImpl - Adding task set 1047.0 with 1 tasks resource profile 0
17:47:41.301 INFO TaskSetManager - Starting task 0.0 in stage 1047.0 (TID 964) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
17:47:41.302 INFO Executor - Running task 0.0 in stage 1047.0 (TID 964)
17:47:41.303 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.303 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.305 INFO ShuffleBlockFetcherIterator - Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.306 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
17:47:41.307 INFO Executor - Finished task 0.0 in stage 1047.0 (TID 964). 1922 bytes result sent to driver
17:47:41.307 INFO TaskSetManager - Finished task 0.0 in stage 1047.0 (TID 964) in 6 ms on localhost (executor driver) (1/1)
17:47:41.307 INFO TaskSchedulerImpl - Removed TaskSet 1047.0, whose tasks have all completed, from pool
17:47:41.307 INFO DAGScheduler - ShuffleMapStage 1047 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.008 s
17:47:41.307 INFO DAGScheduler - looking for newly runnable stages
17:47:41.307 INFO DAGScheduler - running: HashSet()
17:47:41.308 INFO DAGScheduler - waiting: HashSet(ResultStage 1048)
17:47:41.308 INFO DAGScheduler - failed: HashSet()
17:47:41.308 INFO DAGScheduler - Submitting ResultStage 1048 (MapPartitionsRDD[5383] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
17:47:41.308 INFO MemoryStore - Block broadcast_1919 stored as values in memory (estimated size 41.2 KiB, free 1917.5 MiB)
17:47:41.309 INFO MemoryStore - Block broadcast_1919_piece0 stored as bytes in memory (estimated size 15.3 KiB, free 1917.5 MiB)
17:47:41.309 INFO BlockManagerInfo - Added broadcast_1919_piece0 in memory on localhost:42685 (size: 15.3 KiB, free: 1919.4 MiB)
17:47:41.309 INFO SparkContext - Created broadcast 1919 from broadcast at DAGScheduler.scala:1580
17:47:41.309 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1048 (MapPartitionsRDD[5383] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
17:47:41.309 INFO TaskSchedulerImpl - Adding task set 1048.0 with 1 tasks resource profile 0
17:47:41.310 INFO TaskSetManager - Starting task 0.0 in stage 1048.0 (TID 965) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
17:47:41.310 INFO Executor - Running task 0.0 in stage 1048.0 (TID 965)
17:47:41.312 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
17:47:41.312 INFO
ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:41.314 INFO Executor - Finished task 0.0 in stage 1048.0 (TID 965). 1944 bytes result sent to driver 17:47:41.314 INFO TaskSetManager - Finished task 0.0 in stage 1048.0 (TID 965) in 4 ms on localhost (executor driver) (1/1) 17:47:41.314 INFO TaskSchedulerImpl - Removed TaskSet 1048.0, whose tasks have all completed, from pool 17:47:41.314 INFO DAGScheduler - ResultStage 1048 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.006 s 17:47:41.314 INFO DAGScheduler - Job 524 is finished. Cancelling potential speculative or zombie tasks for this job 17:47:41.314 INFO TaskSchedulerImpl - Killing all running tasks in stage 1048: Stage finished 17:47:41.314 INFO DAGScheduler - Job 524 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.038264 s 17:47:41.326 INFO MemoryStore - Block broadcast_1920 stored as values in memory (estimated size 93.5 KiB, free 1917.4 MiB) 17:47:41.327 INFO MemoryStore - Block broadcast_1920_piece0 stored as bytes in memory (estimated size 5.6 KiB, free 1917.4 MiB) 17:47:41.327 INFO BlockManagerInfo - Added broadcast_1920_piece0 in memory on localhost:42685 (size: 5.6 KiB, free: 1919.4 MiB) 17:47:41.327 INFO SparkContext - Created broadcast 1920 from broadcast at ReadsSparkSink.java:133 17:47:41.331 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 17:47:41.331 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 17:47:41.331 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 17:47:41.345 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83 17:47:41.346 INFO DAGScheduler - Registering RDD 5385 (mapToPair at SparkUtils.java:161) as input to shuffle 288 17:47:41.346 INFO DAGScheduler - Got job 525 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions 
17:47:41.346 INFO DAGScheduler - Final stage: ResultStage 1053 (runJob at SparkHadoopWriter.scala:83) 17:47:41.346 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1052) 17:47:41.346 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1052) 17:47:41.346 INFO DAGScheduler - Submitting ShuffleMapStage 1052 (MapPartitionsRDD[5385] at mapToPair at SparkUtils.java:161), which has no missing parents 17:47:41.347 INFO MemoryStore - Block broadcast_1921 stored as values in memory (estimated size 39.1 KiB, free 1917.4 MiB) 17:47:41.348 INFO MemoryStore - Block broadcast_1921_piece0 stored as bytes in memory (estimated size 14.1 KiB, free 1917.4 MiB) 17:47:41.348 INFO BlockManagerInfo - Added broadcast_1921_piece0 in memory on localhost:42685 (size: 14.1 KiB, free: 1919.4 MiB) 17:47:41.348 INFO SparkContext - Created broadcast 1921 from broadcast at DAGScheduler.scala:1580 17:47:41.348 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1052 (MapPartitionsRDD[5385] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 17:47:41.348 INFO TaskSchedulerImpl - Adding task set 1052.0 with 1 tasks resource profile 0 17:47:41.349 INFO TaskSetManager - Starting task 0.0 in stage 1052.0 (TID 966) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes) 17:47:41.349 INFO Executor - Running task 0.0 in stage 1052.0 (TID 966) 17:47:41.351 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:41.351 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:41.353 INFO ShuffleBlockFetcherIterator - Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:41.353 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:41.355 INFO Executor - Finished 
task 0.0 in stage 1052.0 (TID 966). 1922 bytes result sent to driver 17:47:41.355 INFO TaskSetManager - Finished task 0.0 in stage 1052.0 (TID 966) in 6 ms on localhost (executor driver) (1/1) 17:47:41.355 INFO TaskSchedulerImpl - Removed TaskSet 1052.0, whose tasks have all completed, from pool 17:47:41.355 INFO DAGScheduler - ShuffleMapStage 1052 (mapToPair at SparkUtils.java:161) finished in 0.009 s 17:47:41.355 INFO DAGScheduler - looking for newly runnable stages 17:47:41.355 INFO DAGScheduler - running: HashSet() 17:47:41.355 INFO DAGScheduler - waiting: HashSet(ResultStage 1053) 17:47:41.355 INFO DAGScheduler - failed: HashSet() 17:47:41.355 INFO DAGScheduler - Submitting ResultStage 1053 (MapPartitionsRDD[5391] at saveAsTextFile at SamSink.java:65), which has no missing parents 17:47:41.361 INFO MemoryStore - Block broadcast_1922 stored as values in memory (estimated size 184.3 KiB, free 1917.2 MiB) 17:47:41.362 INFO MemoryStore - Block broadcast_1922_piece0 stored as bytes in memory (estimated size 78.3 KiB, free 1917.1 MiB) 17:47:41.362 INFO BlockManagerInfo - Added broadcast_1922_piece0 in memory on localhost:42685 (size: 78.3 KiB, free: 1919.3 MiB) 17:47:41.363 INFO SparkContext - Created broadcast 1922 from broadcast at DAGScheduler.scala:1580 17:47:41.363 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1053 (MapPartitionsRDD[5391] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0)) 17:47:41.363 INFO TaskSchedulerImpl - Adding task set 1053.0 with 1 tasks resource profile 0 17:47:41.363 INFO TaskSetManager - Starting task 0.0 in stage 1053.0 (TID 967) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 17:47:41.364 INFO Executor - Running task 0.0 in stage 1053.0 (TID 967) 17:47:41.369 INFO ShuffleBlockFetcherIterator - Getting 1 (3.9 KiB) non-empty blocks including 1 (3.9 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 17:47:41.369 INFO 
ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 17:47:41.371 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 17:47:41.371 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 17:47:41.371 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 17:47:41.380 INFO FileOutputCommitter - Saved output of task 'attempt_202510271747411713958835285011694_5391_m_000000_0' to file:/tmp/MarkDups15095137206789708429/MarkDups.sam.parts/_temporary/0/task_202510271747411713958835285011694_5391_m_000000 17:47:41.380 INFO SparkHadoopMapRedUtil - attempt_202510271747411713958835285011694_5391_m_000000_0: Committed. Elapsed time: 0 ms. 17:47:41.380 INFO Executor - Finished task 0.0 in stage 1053.0 (TID 967). 1858 bytes result sent to driver 17:47:41.381 INFO TaskSetManager - Finished task 0.0 in stage 1053.0 (TID 967) in 18 ms on localhost (executor driver) (1/1) 17:47:41.381 INFO TaskSchedulerImpl - Removed TaskSet 1053.0, whose tasks have all completed, from pool 17:47:41.381 INFO DAGScheduler - ResultStage 1053 (runJob at SparkHadoopWriter.scala:83) finished in 0.025 s 17:47:41.381 INFO DAGScheduler - Job 525 is finished. Cancelling potential speculative or zombie tasks for this job 17:47:41.381 INFO TaskSchedulerImpl - Killing all running tasks in stage 1053: Stage finished 17:47:41.381 INFO DAGScheduler - Job 525 finished: runJob at SparkHadoopWriter.scala:83, took 0.035682 s 17:47:41.381 INFO SparkHadoopWriter - Start to commit write Job job_202510271747411713958835285011694_5391. 17:47:41.385 INFO SparkHadoopWriter - Write Job job_202510271747411713958835285011694_5391 committed. Elapsed time: 3 ms. 
17:47:41.390 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups15095137206789708429/MarkDups.sam
17:47:41.393 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups15095137206789708429/MarkDups.sam done
17:47:41.394 INFO MarkDuplicatesSpark - Shutting down engine
[October 27, 2025 at 5:47:41 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1631584256
17:47:41.396 WARN IntelDeflaterFactory - Intel Deflater not supported, using Java.util.zip.Deflater
17:47:47.510 ERROR Executor - Exception in task 0.0 in stage 1273.0 (TID 1114)
org.broadinstitute.hellbender.exceptions.UserException$ReadMissingReadGroup: SAM/BAM/CRAM file (unknown) is malformed: Read RUNID:7:1203:2886:82292 is missing the read group (RG) tag, which is required by the GATK. Please use http://gatkforums.broadinstitute.org/discussion/59/companion-utilities-replacereadgroups to fix this problem
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.getLibraryForRead(MarkDuplicatesSparkUtils.java:64) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.EmptyFragment.<init>(EmptyFragment.java:36) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.MarkDuplicatesSparkRecord.newEmptyFragment(MarkDuplicatesSparkRecord.java:44) ~[main/:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$0(MarkDuplicatesSparkUtils.java:141) ~[main/:?]
	at java.base/java.util.stream.ReferencePipeline$15$1.accept(ReferencePipeline.java:540) ~[?:?]
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$5ac2632f$1(MarkDuplicatesSparkUtils.java:150) ~[main/:?]
	at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMapToPair$1(JavaRDDLike.scala:143) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at scala.collection.Iterator$$anon$10.nextCur(Iterator.scala:594) ~[scala-library-2.13.14.jar:?]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:608) ~[scala-library-2.13.14.jar:?]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.Task.run(Task.scala:141) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623) [spark-core_2.13-3.5.0.jar:3.5.0]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:833) [?:?]
17:47:47.512 ERROR TaskSetManager - Task 0 in stage 1273.0 failed 1 times; aborting job
17:47:47.731 ERROR Executor - Exception in task 0.0 in stage 1280.0 (TID 1117)
org.broadinstitute.hellbender.exceptions.UserException$HeaderMissingReadGroup: SAM/BAM/CRAM file (unknown) is malformed: Read RUNID:7:1203:2886:82292 contains an (RG) tag with the group NotADuplicateGroup which is not found in the file header.
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.getLibraryForRead(MarkDuplicatesSparkUtils.java:66) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.EmptyFragment.<init>(EmptyFragment.java:36) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.MarkDuplicatesSparkRecord.newEmptyFragment(MarkDuplicatesSparkRecord.java:44) ~[main/:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$0(MarkDuplicatesSparkUtils.java:141) ~[main/:?]
	at java.base/java.util.stream.ReferencePipeline$15$1.accept(ReferencePipeline.java:540) ~[?:?]
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$5ac2632f$1(MarkDuplicatesSparkUtils.java:150) ~[main/:?]
	at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMapToPair$1(JavaRDDLike.scala:143) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at scala.collection.Iterator$$anon$10.nextCur(Iterator.scala:594) ~[scala-library-2.13.14.jar:?]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:608) ~[scala-library-2.13.14.jar:?]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.Task.run(Task.scala:141) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623) [spark-core_2.13-3.5.0.jar:3.5.0]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:833) [?:?]
17:47:47.733 ERROR TaskSetManager - Task 0 in stage 1280.0 failed 1 times; aborting job
[October 27, 2025 at 5:48:12 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes.
Runtime.totalMemory()=1631584256 [October 27, 2025 at 5:48:12 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1631584256 [October 27, 2025 at 5:48:12 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1631584256 [October 27, 2025 at 5:48:13 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1631584256