Class org.broadinstitute.hellbender.tools.spark.pipelines.MarkDuplicatesSparkIntegrationTest

100% successful

Tests

Test | Duration | Result |
---|---|---|
assertThatExpectedOutputUpdateToggleIsDisabled | 0s | passed |
testAssertAssumeUnsortedFilesAreQueryGroupedFiles | 0.571s | passed |
testAssertCorrectSortOrderMultipleBams | 0.132s | passed |
testAssertCorrectSortOrderMultipleBamsOverriding | 0.537s | passed |
testBulkFragmentsNoDuplicates | 1.049s | passed |
testBulkFragmentsWithDuplicates | 2.096s | passed |
testDifferentChromosomesInOppositeOrder | 0.313s | passed |
testDuplicateDetectionDataProviderWithMetrics[0](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/example.chr1.1-1K.unmarkedDups.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.chr1.1-1K.unmarkedDups.markDuplicate.metrics) | 0.289s | passed |
testDuplicateDetectionDataProviderWithMetrics[1](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/optical_dupes.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.optical_dupes.markDuplicate.metrics) | 0.295s | passed |
testDuplicateDetectionDataProviderWithMetrics[2](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/inputSingleLibrarySolexa16404.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.inputSingleLibrarySolexa16404.metrics) | 0.264s | passed |
testFlowModeFlag | 1.579s | passed |
testHashCollisionHandling | 1.087s | passed |
testMDOrder[0](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.mdOrderBug.bam) | 0.364s | passed |
testMDOrder[1](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug2.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.mdOrderBug2.bam) | 0.368s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[0](true, true, true) | 0.357s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[1](true, false, true) | 0.316s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[2](false, true, true) | 0.313s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[3](true, true, false) | 0.317s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[4](true, false, false) | 0.316s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[5](false, true, false) | 0.332s | passed |
testMappedFragmentAndMappedPairFirstOfPairNonPrimary | 0.314s | passed |
testMappedFragmentAndMatePairFirstUnmapped | 0.319s | passed |
testMappedFragmentAndMatePairSecondUnmapped | 0.327s | passed |
testMappedPairAndMappedFragmentAndMatePairFirstUnmapped | 0.320s | passed |
testMappedPairAndMappedFragmentAndMatePairSecondUnmapped | 0.345s | passed |
testMappedPairAndMatePairFirstOppositeStrandSecondUnmapped | 0.332s | passed |
testMappedPairAndMatePairFirstUnmapped | 0.311s | passed |
testMappedPairAndMatePairSecondUnmapped | 0.320s | passed |
testMappedPairWithFirstEndSamePositionAndOther | 0.317s | passed |
testMappedPairWithSamePosition | 0.312s | passed |
testMappedPairWithSamePositionSameCigar | 0.321s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[0]([Ljava.io.File;@e91f37b, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.425s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[1]([Ljava.io.File;@412445a7, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.399s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[2]([Ljava.io.File;@243d8998, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.389s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[3]([Ljava.io.File;@5f92f3aa, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.364s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[4]([Ljava.io.File;@1945f9ec, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.365s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[5]([Ljava.io.File;@63de6306, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.324s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[6]([Ljava.io.File;@37c9c0a3, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.285s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[0]([Ljava.io.File;@23292a1d, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.369s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[1]([Ljava.io.File;@1513f993, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.394s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[2]([Ljava.io.File;@4227894a, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.383s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[3]([Ljava.io.File;@3e1ed8e5, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.353s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[4]([Ljava.io.File;@5ce4d5e8, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.362s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[5]([Ljava.io.File;@597ed255, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.209s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[6]([Ljava.io.File;@4655ba7d, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.208s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[0]([Ljava.io.File;@595f2efa, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.369s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[1]([Ljava.io.File;@5038085b, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.388s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[2]([Ljava.io.File;@3c926f0c, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.474s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[3]([Ljava.io.File;@efeae0e, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.350s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[4]([Ljava.io.File;@468e0d8b, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.346s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[5]([Ljava.io.File;@62cc785e, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.238s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[6]([Ljava.io.File;@153cb5a7, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.189s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[0]([Ljava.io.File;@31e65167, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.353s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[1]([Ljava.io.File;@e1f5787, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.370s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[2]([Ljava.io.File;@69722efc, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.415s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[3]([Ljava.io.File;@2971cfe6, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.334s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[4]([Ljava.io.File;@267672d7, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.345s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[5]([Ljava.io.File;@7870c26a, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.163s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[6]([Ljava.io.File;@1bca26d3, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.193s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[0]([Ljava.io.File;@29d8ee88, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.358s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[1]([Ljava.io.File;@ce6d49, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.403s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[2]([Ljava.io.File;@1b6701cf, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.380s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[3]([Ljava.io.File;@27e1a278, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.347s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[4]([Ljava.io.File;@11194cbf, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.338s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[5]([Ljava.io.File;@40acfa2, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.169s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[6]([Ljava.io.File;@6520f873, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.202s | passed |
testMatePairFirstUnmapped | 0.314s | passed |
testMatePairSecondUnmapped | 0.306s | passed |
testNoReadGroupInRead | 0.253s | passed |
testNonExistantReadGroupInRead | 0.224s | passed |
testNullOpticalDuplicates | 0.360s | passed |
testOpticalDuplicateClusterOneEndSamePositionOneCluster | 0.320s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicatesWithinPixelDistance | 0.308s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[0](RUNID:7:1203:2886:82292, RUNID:7:1205:3886:16834) | 0.337s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[10](RUNID:7:1203:6886:16756, RUNID:7:1203:4886:26756) | 0.320s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[11](RUNID:7:1203:2886:34756, RUNID:7:1203:2886:36756) | 0.310s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[12](RUNID:7:1203:2886:35756, RUNID:7:1203:2886:36756) | 0.319s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[13](RUNID:7:1203:2886:37756, RUNID:7:1203:2886:36756) | 0.314s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[14](RUNID:7:1203:2886:38756, RUNID:7:1203:2886:36756) | 0.313s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[15](HK3T5CCXX160204:3:1112:11586:37067, HK3T5CCXX160204:3:1112:11586:32144) | 0.318s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[1](RUNID:7:1203:2886:16756, RUNID:7:1205:3886:16756) | 0.328s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[2](RUNID:7:1204:2886:16756, RUNID:7:1205:3886:16756) | 0.324s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[3](RUNID:7:1205:2886:16756, RUNID:7:1205:3886:16756) | 0.321s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[4](RUNID:7:1206:2886:16756, RUNID:7:1205:3886:16756) | 0.308s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[5](RUNID:7:1207:2886:16756, RUNID:7:1205:3886:16756) | 0.330s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[6](RUNID:7:1203:2886:16756, RUNID:7:1203:4886:26756) | 0.319s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[7](RUNID:7:1203:3886:16756, RUNID:7:1203:4886:26756) | 0.317s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[8](RUNID:7:1203:4886:16756, RUNID:7:1203:4886:26756) | 0.307s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[9](RUNID:7:1203:5886:16756, RUNID:7:1203:4886:26756) | 0.317s | passed |
testOpticalDuplicateClusterSamePositionOneOpticalDuplicatesWithinPixelDistance | 0.319s | passed |
testOpticalDuplicateClustersAddingSecondEndFirstSameCoordinate | 0.305s | passed |
testOpticalDuplicateFinding | 0.314s | passed |
testOpticalDuplicateFindingPxelDistance | 0.308s | passed |
testOpticalDuplicatesAndPCRDuplicatesOrientationDifference | 0.302s | passed |
testOpticalDuplicatesDifferentReadGroups | 0.321s | passed |
testOpticalDuplicatesTheSameReadGroup | 0.307s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[0](RUNID:7:1203:2886:82292, RUNID:7:1205:3886:16834) | 0.319s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[10](RUNID:7:1203:6886:16756, RUNID:7:1203:4886:26756) | 0.320s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[11](RUNID:7:1203:2886:34756, RUNID:7:1203:2886:36756) | 0.314s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[12](RUNID:7:1203:2886:35756, RUNID:7:1203:2886:36756) | 0.314s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[13](RUNID:7:1203:2886:37756, RUNID:7:1203:2886:36756) | 0.329s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[14](RUNID:7:1203:2886:38756, RUNID:7:1203:2886:36756) | 0.315s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[15](HK3T5CCXX160204:3:1112:11586:37067, HK3T5CCXX160204:3:1112:11586:32144) | 0.324s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[1](RUNID:7:1203:2886:16756, RUNID:7:1205:3886:16756) | 0.308s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[2](RUNID:7:1204:2886:16756, RUNID:7:1205:3886:16756) | 0.309s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[3](RUNID:7:1205:2886:16756, RUNID:7:1205:3886:16756) | 0.318s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[4](RUNID:7:1206:2886:16756, RUNID:7:1205:3886:16756) | 0.312s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[5](RUNID:7:1207:2886:16756, RUNID:7:1205:3886:16756) | 0.315s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[6](RUNID:7:1203:2886:16756, RUNID:7:1203:4886:26756) | 0.308s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[7](RUNID:7:1203:3886:16756, RUNID:7:1203:4886:26756) | 0.319s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[8](RUNID:7:1203:4886:16756, RUNID:7:1203:4886:26756) | 0.315s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[9](RUNID:7:1203:5886:16756, RUNID:7:1203:4886:26756) | 0.326s | passed |
testPathologicalOrderingAtTheSamePosition | 0.317s | passed |
testReadSameStartPositionOrientationOverride | 0.331s | passed |
testReadsHaveSameStartPositionButDifferentChromosomeNonEquivalence | 0.332s | passed |
testSecondEndIsBeforeFirstInCoordinate | 0.327s | passed |
testSingleMappedFragment | 0.310s | passed |
testSingleMappedFragmentAndSingleMappedPair | 0.308s | passed |
testSingleMappedFragmentAndTwoMappedPairs | 0.316s | passed |
testSingleMappedPair | 0.307s | passed |
testSingleUnmappedFragment | 0.317s | passed |
testSingleUnmappedPair | 0.302s | passed |
testStackOverFlowPairSetSwap | 0.321s | passed |
testSupplementaryReadUnmappedMate | 0.311s | passed |
testThreeGroupsOnDifferentChromosomesOfThreeMappedPairs | 0.310s | passed |
testThreeMappedPairs | 0.313s | passed |
testThreeMappedPairsWithMatchingSecondMate | 0.320s | passed |
testTwoGroupsOnDifferentChromosomesOfThreeMappedPairs | 0.313s | passed |
testTwoGroupsOnDifferentChromosomesOfTwoFragments | 0.303s | passed |
testTwoGroupsOnDifferentChromosomesOfTwoMappedPairs | 0.304s | passed |
testTwoMappedFragments | 0.329s | passed |
testTwoMappedPairWithSamePosition | 0.326s | passed |
testTwoMappedPairWithSamePositionDifferentStrands | 0.317s | passed |
testTwoMappedPairWithSamePositionDifferentStrands2 | 0.311s | passed |
testTwoMappedPairs | 0.312s | passed |
testTwoMappedPairsAndMappedSecondaryFragment | 0.320s | passed |
testTwoMappedPairsAndTerminalUnmappedFragment | 0.306s | passed |
testTwoMappedPairsAndTerminalUnmappedPair | 0.310s | passed |
testTwoMappedPairsMatesSoftClipped | 0.305s | passed |
testTwoMappedPairsWithOppositeOrientations | 0.316s | passed |
testTwoMappedPairsWithOppositeOrientationsNumberTwo | 0.306s | passed |
testTwoMappedPairsWithSoftClipping | 0.315s | passed |
testTwoMappedPairsWithSoftClippingBoth | 0.332s | passed |
testTwoMappedPairsWithSoftClippingFirstOfPairOnly | 0.304s | passed |
testTwoMappedPairsWithSoftClippingFirstOfPairOnlyNoMateCigar | 0.312s | passed |
testTwoMappedPairsWithSupplementaryReads[0](true, true, true) | 0.308s | passed |
testTwoMappedPairsWithSupplementaryReads[1](true, false, true) | 0.316s | passed |
testTwoMappedPairsWithSupplementaryReads[2](false, true, true) | 0.304s | passed |
testTwoMappedPairsWithSupplementaryReads[3](true, true, false) | 0.305s | passed |
testTwoMappedPairsWithSupplementaryReads[4](true, false, false) | 0.315s | passed |
testTwoMappedPairsWithSupplementaryReads[5](false, true, false) | 0.313s | passed |
testTwoUnmappedFragments | 0.315s | passed |
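The per-library maps in the parameterized tests above end with a duplication fraction (0.5 for `mylib`, 0.190476 for Solexa-16404, 0.133333 for Solexa-16412). These match Picard's documented PERCENT_DUPLICATION formula, (UNPAIRED_READ_DUPLICATES + 2 × READ_PAIR_DUPLICATES) / (UNPAIRED_READS_EXAMINED + 2 × READ_PAIRS_EXAMINED). A minimal sketch, assuming that formula; the helper name `percent_duplication` is ours, and the exact mapping of the bracketed per-library arrays onto metric fields is internal to the test and not spelled out in this report:

```python
def percent_duplication(unpaired_examined: int, pairs_examined: int,
                        unpaired_dups: int, pair_dups: int) -> float:
    """Fraction of examined reads marked as duplicates (Picard's
    PERCENT_DUPLICATION: paired counts are doubled because each
    duplicate pair contributes two duplicate reads)."""
    examined = unpaired_examined + 2 * pairs_examined
    dups = unpaired_dups + 2 * pair_dups
    return dups / examined if examined else 0.0

# 2 read pairs examined, 1 duplicate pair -> the 0.5 reported for 'mylib'
print(percent_duplication(0, 2, 0, 1))  # 0.5

# 3 unpaired reads + 9 pairs examined, 2 duplicate pairs
# -> 4/21 = 0.190476, the value reported for Solexa-16404
print(round(percent_duplication(3, 9, 0, 2), 6))
```

Under this reading, the Solexa-16404 and Solexa-16412 fractions (4/21 and 2/15) are consistent with the pair counts shown in their arrays.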
Standard error
[June 12, 2025 at 8:45:27 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1254096896
[June 12, 2025 at 8:45:27 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1254096896
[June 12, 2025 at 8:45:28 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1254096896
[June 12, 2025 at 8:45:29 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1254096896
[June 12, 2025 at 8:45:31 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1254096896
20:45:31.115 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:31.116 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
20:45:31.116 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
20:45:31.116 INFO MarkDuplicatesSpark - Executing as runner@pkrvmxyh4eaekms on Linux v6.11.0-1015-azure amd64
20:45:31.116 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
20:45:31.116 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:45:31 PM UTC
20:45:31.116 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:31.116 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:31.116 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
20:45:31.116 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
20:45:31.116 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
20:45:31.116 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
20:45:31.116 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
20:45:31.116 INFO MarkDuplicatesSpark - Inflater: IntelInflater
20:45:31.116 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
20:45:31.116 INFO MarkDuplicatesSpark - Requester pays: disabled
20:45:31.116 INFO MarkDuplicatesSpark - Initializing engine
20:45:31.116 INFO MarkDuplicatesSpark - Done initializing engine
20:45:31.117 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
20:45:31.119 INFO MemoryStore - Block broadcast_1353 stored as values in memory (estimated size 306.3 KiB, free 1903.4 MiB)
20:45:31.126 INFO MemoryStore - Block broadcast_1353_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1903.4 MiB)
20:45:31.126 INFO BlockManagerInfo - Added broadcast_1353_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1918.9 MiB)
20:45:31.127 INFO SparkContext - Created broadcast 1353 from newAPIHadoopFile at PathSplitSource.java:96
20:45:31.148 INFO MemoryStore - Block broadcast_1354 stored as values in memory (estimated size 306.3 KiB, free 1903.1 MiB)
20:45:31.154 INFO MemoryStore - Block broadcast_1354_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1903.0 MiB)
20:45:31.155 INFO BlockManagerInfo - Added broadcast_1354_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1918.8 MiB)
20:45:31.155 INFO SparkContext - Created broadcast 1354 from newAPIHadoopFile at PathSplitSource.java:96
20:45:31.189 INFO SparkContext - Starting job: collect at SparkUtils.java:205
20:45:31.190 INFO FileInputFormat - Total input files to process : 1
20:45:31.191 INFO DAGScheduler - Registering RDD 3456 (mapToPair at SparkUtils.java:161) as input to shuffle 97
20:45:31.191 INFO DAGScheduler - Got job 417 (collect at SparkUtils.java:205) with 1 output partitions
20:45:31.191 INFO DAGScheduler - Final stage: ResultStage 726 (collect at SparkUtils.java:205)
20:45:31.191 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 725)
20:45:31.191 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 725)
20:45:31.192 INFO DAGScheduler - Submitting ShuffleMapStage 725 (MapPartitionsRDD[3456] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:31.214 INFO MemoryStore - Block broadcast_1355 stored as values in memory (estimated size 460.4 KiB, free 1902.6 MiB)
20:45:31.217 INFO MemoryStore - Block broadcast_1355_piece0 stored as bytes in memory (estimated size 202.5 KiB, free 1902.4 MiB)
20:45:31.217 INFO BlockManagerInfo - Added broadcast_1355_piece0 in memory on localhost:34191 (size: 202.5 KiB, free: 1918.6 MiB)
20:45:31.217 INFO SparkContext - Created broadcast 1355 from broadcast at DAGScheduler.scala:1580
20:45:31.218 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 725 (MapPartitionsRDD[3456] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:31.218 INFO TaskSchedulerImpl - Adding task set 725.0 with 1 tasks resource profile 0
20:45:31.218 INFO TaskSetManager - Starting task 0.0 in stage 725.0 (TID 667) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9735 bytes)
20:45:31.220 INFO Executor - Running task 0.0 in stage 725.0 (TID 667)
20:45:31.251 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug.bam:0+1507
20:45:31.256 INFO Executor - Finished task 0.0 in stage 725.0 (TID 667). 1148 bytes result sent to driver
20:45:31.256 INFO TaskSetManager - Finished task 0.0 in stage 725.0 (TID 667) in 38 ms on localhost (executor driver) (1/1)
20:45:31.256 INFO TaskSchedulerImpl - Removed TaskSet 725.0, whose tasks have all completed, from pool
20:45:31.256 INFO DAGScheduler - ShuffleMapStage 725 (mapToPair at SparkUtils.java:161) finished in 0.064 s
20:45:31.257 INFO DAGScheduler - looking for newly runnable stages
20:45:31.257 INFO DAGScheduler - running: HashSet()
20:45:31.257 INFO DAGScheduler - waiting: HashSet(ResultStage 726)
20:45:31.257 INFO DAGScheduler - failed: HashSet()
20:45:31.257 INFO DAGScheduler - Submitting ResultStage 726 (MapPartitionsRDD[3459] at mapPartitions at SparkUtils.java:188), which has no missing parents
20:45:31.258 INFO MemoryStore - Block broadcast_1356 stored as values in memory (estimated size 7.4 KiB, free 1902.4 MiB)
20:45:31.258 INFO MemoryStore - Block broadcast_1356_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1902.4 MiB)
20:45:31.258 INFO BlockManagerInfo - Added broadcast_1356_piece0 in memory on localhost:34191 (size: 4.1 KiB, free: 1918.6 MiB)
20:45:31.259 INFO SparkContext - Created broadcast 1356 from broadcast at DAGScheduler.scala:1580
20:45:31.259 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 726 (MapPartitionsRDD[3459] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
20:45:31.259 INFO TaskSchedulerImpl - Adding task set 726.0 with 1 tasks resource profile 0
20:45:31.260 INFO TaskSetManager - Starting task 0.0 in stage 726.0 (TID 668) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:31.260 INFO Executor - Running task 0.0 in stage 726.0 (TID 668)
20:45:31.262 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.262 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.265 INFO Executor - Finished task 0.0 in stage 726.0 (TID 668). 2238 bytes result sent to driver
20:45:31.265 INFO TaskSetManager - Finished task 0.0 in stage 726.0 (TID 668) in 6 ms on localhost (executor driver) (1/1)
20:45:31.265 INFO TaskSchedulerImpl - Removed TaskSet 726.0, whose tasks have all completed, from pool
20:45:31.266 INFO DAGScheduler - ResultStage 726 (collect at SparkUtils.java:205) finished in 0.009 s
20:45:31.266 INFO DAGScheduler - Job 417 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:31.266 INFO TaskSchedulerImpl - Killing all running tasks in stage 726: Stage finished
20:45:31.266 INFO DAGScheduler - Job 417 finished: collect at SparkUtils.java:205, took 0.076859 s
20:45:31.272 INFO MemoryStore - Block broadcast_1357 stored as values in memory (estimated size 1632.0 B, free 1902.4 MiB)
20:45:31.273 INFO MemoryStore - Block broadcast_1357_piece0 stored as bytes in memory (estimated size 145.0 B, free 1902.4 MiB)
20:45:31.273 INFO BlockManagerInfo - Added broadcast_1357_piece0 in memory on localhost:34191 (size: 145.0 B, free: 1918.6 MiB)
20:45:31.274 INFO SparkContext - Created broadcast 1357 from broadcast at MarkDuplicatesSparkUtils.java:126
20:45:31.274 INFO MemoryStore - Block broadcast_1358 stored as values in memory (estimated size 304.0 B, free 1902.3 MiB)
20:45:31.280 INFO MemoryStore - Block broadcast_1358_piece0 stored as bytes in memory (estimated size 37.0 B, free 1902.3 MiB)
20:45:31.281 INFO BlockManagerInfo - Added broadcast_1358_piece0 in memory on localhost:34191 (size: 37.0 B, free: 1918.6 MiB)
20:45:31.281 INFO SparkContext - Created broadcast 1358 from broadcast at MarkDuplicatesSparkUtils.java:127
20:45:31.282 INFO BlockManagerInfo - Removed broadcast_1356_piece0 on localhost:34191 in memory (size: 4.1 KiB, free: 1918.6 MiB)
20:45:31.282 INFO BlockManagerInfo - Removed broadcast_1346_piece0 on localhost:34191 in memory (size: 20.0 B, free: 1918.6 MiB)
20:45:31.283 INFO BlockManagerInfo - Removed broadcast_1353_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1918.7 MiB)
20:45:31.283 INFO BlockManagerInfo - Removed broadcast_1347_piece0 on localhost:34191 in memory (size: 228.2 KiB, free: 1918.9 MiB)
20:45:31.284 INFO BlockManagerInfo - Removed broadcast_1352_piece0 on localhost:34191 in memory (size: 324.0 KiB, free: 1919.3 MiB)
20:45:31.284 INFO BlockManagerInfo - Removed broadcast_1343_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1919.3 MiB)
20:45:31.286 INFO BlockManagerInfo - Removed broadcast_1345_piece0 on localhost:34191 in memory (size: 19.0 B, free: 1919.3 MiB)
20:45:31.286 INFO BlockManagerInfo - Removed broadcast_1348_piece0 on localhost:34191 in memory (size: 228.2 KiB, free: 1919.5 MiB)
20:45:31.287 INFO BlockManagerInfo - Removed broadcast_1355_piece0 on localhost:34191 in memory (size: 202.5 KiB, free: 1919.7 MiB)
20:45:31.299 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
20:45:31.300 INFO DAGScheduler - Registering RDD 3468 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 100
20:45:31.300 INFO DAGScheduler - Registering RDD 3472 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 99
20:45:31.300 INFO DAGScheduler - Registering RDD 3476 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 98
20:45:31.300 INFO DAGScheduler - Got job 418 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
20:45:31.300 INFO DAGScheduler - Final stage: ResultStage 731 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
20:45:31.300 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 730)
20:45:31.300 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 730)
20:45:31.300 INFO DAGScheduler - Submitting ShuffleMapStage 728 (MapPartitionsRDD[3468] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
20:45:31.301 INFO MemoryStore - Block broadcast_1359 stored as values in memory (estimated size 24.2 KiB, free 1919.0 MiB)
20:45:31.302 INFO MemoryStore - Block broadcast_1359_piece0 stored as bytes in memory (estimated size 12.6 KiB, free 1919.0 MiB)
20:45:31.302 INFO BlockManagerInfo - Added broadcast_1359_piece0 in memory on localhost:34191 (size: 12.6 KiB, free: 1919.7 MiB)
20:45:31.303 INFO SparkContext - Created broadcast 1359 from broadcast at DAGScheduler.scala:1580
20:45:31.303 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 728 (MapPartitionsRDD[3468] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
20:45:31.303 INFO TaskSchedulerImpl - Adding task set 728.0 with 1 tasks resource profile 0
20:45:31.303 INFO TaskSetManager - Starting task 0.0 in stage 728.0 (TID 669) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
20:45:31.304 INFO Executor - Running task 0.0 in stage 728.0 (TID 669)
20:45:31.306 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.306 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.311 INFO Executor - Finished task 0.0 in stage 728.0 (TID 669). 1922 bytes result sent to driver
20:45:31.311 INFO TaskSetManager - Finished task 0.0 in stage 728.0 (TID 669) in 8 ms on localhost (executor driver) (1/1)
20:45:31.311 INFO DAGScheduler - ShuffleMapStage 728 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.010 s
20:45:31.311 INFO DAGScheduler - looking for newly runnable stages
20:45:31.311 INFO DAGScheduler - running: HashSet()
20:45:31.312 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 729, ShuffleMapStage 730, ResultStage 731)
20:45:31.312 INFO DAGScheduler - failed: HashSet()
20:45:31.312 INFO DAGScheduler - Submitting ShuffleMapStage 729 (MapPartitionsRDD[3472] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
20:45:31.312 INFO TaskSchedulerImpl - Removed TaskSet 728.0, whose tasks have all completed, from pool
20:45:31.313 INFO MemoryStore - Block broadcast_1360 stored as values in memory (estimated size 28.3 KiB, free 1918.9 MiB)
20:45:31.313 INFO MemoryStore - Block broadcast_1360_piece0 stored as bytes in memory (estimated size 14.8 KiB, free 1918.9 MiB)
20:45:31.314 INFO BlockManagerInfo - Added broadcast_1360_piece0 in memory on localhost:34191 (size: 14.8 KiB, free: 1919.7 MiB)
20:45:31.314 INFO SparkContext - Created broadcast 1360 from broadcast at DAGScheduler.scala:1580
20:45:31.314 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 729 (MapPartitionsRDD[3472] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
20:45:31.314 INFO TaskSchedulerImpl - Adding task set 729.0 with 1 tasks resource profile 0
20:45:31.315 INFO TaskSetManager - Starting task 0.0 in stage 729.0 (TID 670) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
20:45:31.315 INFO Executor - Running task 0.0 in stage 729.0 (TID 670)
20:45:31.317 INFO ShuffleBlockFetcherIterator - Getting 1 (129.0 B) non-empty blocks including 1 (129.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B)
remote blocks 20:45:31.318 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:31.323 INFO Executor - Finished task 0.0 in stage 729.0 (TID 670). 1922 bytes result sent to driver 20:45:31.324 INFO TaskSetManager - Finished task 0.0 in stage 729.0 (TID 670) in 9 ms on localhost (executor driver) (1/1) 20:45:31.324 INFO TaskSchedulerImpl - Removed TaskSet 729.0, whose tasks have all completed, from pool 20:45:31.324 INFO DAGScheduler - ShuffleMapStage 729 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.012 s 20:45:31.324 INFO DAGScheduler - looking for newly runnable stages 20:45:31.324 INFO DAGScheduler - running: HashSet() 20:45:31.324 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 730, ResultStage 731) 20:45:31.324 INFO DAGScheduler - failed: HashSet() 20:45:31.325 INFO DAGScheduler - Submitting ShuffleMapStage 730 (MapPartitionsRDD[3476] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents 20:45:31.326 INFO MemoryStore - Block broadcast_1361 stored as values in memory (estimated size 19.0 KiB, free 1918.9 MiB) 20:45:31.326 INFO MemoryStore - Block broadcast_1361_piece0 stored as bytes in memory (estimated size 9.7 KiB, free 1918.9 MiB) 20:45:31.326 INFO BlockManagerInfo - Added broadcast_1361_piece0 in memory on localhost:34191 (size: 9.7 KiB, free: 1919.7 MiB) 20:45:31.327 INFO SparkContext - Created broadcast 1361 from broadcast at DAGScheduler.scala:1580 20:45:31.327 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 730 (MapPartitionsRDD[3476] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0)) 20:45:31.327 INFO TaskSchedulerImpl - Adding task set 730.0 with 1 tasks resource profile 0 20:45:31.327 INFO TaskSetManager - Starting task 0.0 in stage 730.0 (TID 671) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes) 20:45:31.328 INFO Executor - Running task 0.0 in stage 730.0 (TID 671) 20:45:31.330 INFO 
ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:31.330 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:31.333 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:31.333 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:31.334 INFO Executor - Finished task 0.0 in stage 730.0 (TID 671). 1922 bytes result sent to driver 20:45:31.335 INFO TaskSetManager - Finished task 0.0 in stage 730.0 (TID 671) in 8 ms on localhost (executor driver) (1/1) 20:45:31.335 INFO TaskSchedulerImpl - Removed TaskSet 730.0, whose tasks have all completed, from pool 20:45:31.335 INFO DAGScheduler - ShuffleMapStage 730 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.010 s 20:45:31.335 INFO DAGScheduler - looking for newly runnable stages 20:45:31.335 INFO DAGScheduler - running: HashSet() 20:45:31.335 INFO DAGScheduler - waiting: HashSet(ResultStage 731) 20:45:31.335 INFO DAGScheduler - failed: HashSet() 20:45:31.336 INFO DAGScheduler - Submitting ResultStage 731 (MapPartitionsRDD[3478] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents 20:45:31.336 INFO MemoryStore - Block broadcast_1362 stored as values in memory (estimated size 20.2 KiB, free 1918.9 MiB) 20:45:31.344 INFO MemoryStore - Block broadcast_1362_piece0 stored as bytes in memory (estimated size 10.6 KiB, free 1918.9 MiB) 20:45:31.344 INFO BlockManagerInfo - Added broadcast_1362_piece0 in memory on localhost:34191 (size: 10.6 KiB, free: 1919.7 MiB) 20:45:31.345 INFO BlockManagerInfo - Removed broadcast_1360_piece0 on localhost:34191 in memory (size: 14.8 KiB, free: 1919.7 MiB) 20:45:31.345 INFO SparkContext - Created broadcast 1362 from broadcast at 
DAGScheduler.scala:1580 20:45:31.345 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 731 (MapPartitionsRDD[3478] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0)) 20:45:31.345 INFO TaskSchedulerImpl - Adding task set 731.0 with 1 tasks resource profile 0 20:45:31.345 INFO BlockManagerInfo - Removed broadcast_1359_piece0 on localhost:34191 in memory (size: 12.6 KiB, free: 1919.7 MiB) 20:45:31.346 INFO TaskSetManager - Starting task 0.0 in stage 731.0 (TID 672) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 20:45:31.346 INFO Executor - Running task 0.0 in stage 731.0 (TID 672) 20:45:31.348 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:31.349 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:31.352 INFO Executor - Finished task 0.0 in stage 731.0 (TID 672). 1944 bytes result sent to driver 20:45:31.352 INFO TaskSetManager - Finished task 0.0 in stage 731.0 (TID 672) in 6 ms on localhost (executor driver) (1/1) 20:45:31.353 INFO TaskSchedulerImpl - Removed TaskSet 731.0, whose tasks have all completed, from pool 20:45:31.353 INFO DAGScheduler - ResultStage 731 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.017 s 20:45:31.353 INFO DAGScheduler - Job 418 is finished. 
Cancelling potential speculative or zombie tasks for this job 20:45:31.353 INFO TaskSchedulerImpl - Killing all running tasks in stage 731: Stage finished 20:45:31.353 INFO DAGScheduler - Job 418 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.054238 s 20:45:31.366 INFO MemoryStore - Block broadcast_1363 stored as values in memory (estimated size 20.3 KiB, free 1918.9 MiB) 20:45:31.367 INFO MemoryStore - Block broadcast_1363_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1918.9 MiB) 20:45:31.367 INFO BlockManagerInfo - Added broadcast_1363_piece0 in memory on localhost:34191 (size: 1850.0 B, free: 1919.7 MiB) 20:45:31.367 INFO SparkContext - Created broadcast 1363 from broadcast at ReadsSparkSink.java:133 20:45:31.368 INFO MemoryStore - Block broadcast_1364 stored as values in memory (estimated size 20.4 KiB, free 1918.9 MiB) 20:45:31.368 INFO MemoryStore - Block broadcast_1364_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1918.9 MiB) 20:45:31.369 INFO BlockManagerInfo - Added broadcast_1364_piece0 in memory on localhost:34191 (size: 1850.0 B, free: 1919.7 MiB) 20:45:31.369 INFO SparkContext - Created broadcast 1364 from broadcast at BamSink.java:76 20:45:31.371 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory 20:45:31.371 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:45:31.371 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:45:31.390 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83 20:45:31.390 INFO DAGScheduler - Registering RDD 3480 (mapToPair at SparkUtils.java:161) as input to shuffle 101 20:45:31.390 INFO DAGScheduler - Got job 419 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions 20:45:31.391 INFO DAGScheduler - Final stage: ResultStage 736 (runJob at SparkHadoopWriter.scala:83) 
20:45:31.391 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 735)
20:45:31.391 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 735)
20:45:31.391 INFO DAGScheduler - Submitting ShuffleMapStage 735 (MapPartitionsRDD[3480] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:31.392 INFO MemoryStore - Block broadcast_1365 stored as values in memory (estimated size 18.1 KiB, free 1918.9 MiB)
20:45:31.393 INFO MemoryStore - Block broadcast_1365_piece0 stored as bytes in memory (estimated size 9.5 KiB, free 1918.9 MiB)
20:45:31.393 INFO BlockManagerInfo - Added broadcast_1365_piece0 in memory on localhost:34191 (size: 9.5 KiB, free: 1919.7 MiB)
20:45:31.393 INFO SparkContext - Created broadcast 1365 from broadcast at DAGScheduler.scala:1580
20:45:31.393 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 735 (MapPartitionsRDD[3480] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:31.393 INFO TaskSchedulerImpl - Adding task set 735.0 with 1 tasks resource profile 0
20:45:31.394 INFO TaskSetManager - Starting task 0.0 in stage 735.0 (TID 673) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:31.394 INFO Executor - Running task 0.0 in stage 735.0 (TID 673)
20:45:31.396 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.396 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.398 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.398 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.399 INFO Executor - Finished task 0.0 in stage 735.0 (TID 673). 1922 bytes result sent to driver
20:45:31.400 INFO TaskSetManager - Finished task 0.0 in stage 735.0 (TID 673) in 6 ms on localhost (executor driver) (1/1)
20:45:31.400 INFO TaskSchedulerImpl - Removed TaskSet 735.0, whose tasks have all completed, from pool
20:45:31.400 INFO DAGScheduler - ShuffleMapStage 735 (mapToPair at SparkUtils.java:161) finished in 0.009 s
20:45:31.400 INFO DAGScheduler - looking for newly runnable stages
20:45:31.400 INFO DAGScheduler - running: HashSet()
20:45:31.400 INFO DAGScheduler - waiting: HashSet(ResultStage 736)
20:45:31.400 INFO DAGScheduler - failed: HashSet()
20:45:31.400 INFO DAGScheduler - Submitting ResultStage 736 (MapPartitionsRDD[3485] at mapToPair at BamSink.java:91), which has no missing parents
20:45:31.407 INFO MemoryStore - Block broadcast_1366 stored as values in memory (estimated size 163.7 KiB, free 1918.7 MiB)
20:45:31.408 INFO MemoryStore - Block broadcast_1366_piece0 stored as bytes in memory (estimated size 73.9 KiB, free 1918.6 MiB)
20:45:31.408 INFO BlockManagerInfo - Added broadcast_1366_piece0 in memory on localhost:34191 (size: 73.9 KiB, free: 1919.6 MiB)
20:45:31.408 INFO SparkContext - Created broadcast 1366 from broadcast at DAGScheduler.scala:1580
20:45:31.408 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 736 (MapPartitionsRDD[3485] at mapToPair at BamSink.java:91) (first 15 tasks are for partitions Vector(0))
20:45:31.408 INFO TaskSchedulerImpl - Adding task set 736.0 with 1 tasks resource profile 0
20:45:31.409 INFO TaskSetManager - Starting task 0.0 in stage 736.0 (TID 674) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:31.409 INFO Executor - Running task 0.0 in stage 736.0 (TID 674)
20:45:31.413 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.413 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.415 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:45:31.415 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:31.416 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:31.416 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:45:31.416 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:31.416 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:31.435 INFO FileOutputCommitter - Saved output of task 'attempt_20250612204531231181212762178004_3485_r_000000_0' to file:/tmp/local13085480587404488327/markdups13860296519650900937.bam.parts/_temporary/0/task_20250612204531231181212762178004_3485_r_000000
20:45:31.436 INFO SparkHadoopMapRedUtil - attempt_20250612204531231181212762178004_3485_r_000000_0: Committed. Elapsed time: 0 ms.
20:45:31.444 INFO Executor - Finished task 0.0 in stage 736.0 (TID 674). 1944 bytes result sent to driver
20:45:31.444 INFO BlockManagerInfo - Removed broadcast_1365_piece0 on localhost:34191 in memory (size: 9.5 KiB, free: 1919.6 MiB)
20:45:31.444 INFO TaskSetManager - Finished task 0.0 in stage 736.0 (TID 674) in 35 ms on localhost (executor driver) (1/1)
20:45:31.445 INFO TaskSchedulerImpl - Removed TaskSet 736.0, whose tasks have all completed, from pool
20:45:31.445 INFO DAGScheduler - ResultStage 736 (runJob at SparkHadoopWriter.scala:83) finished in 0.045 s
20:45:31.445 INFO DAGScheduler - Job 419 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:31.445 INFO TaskSchedulerImpl - Killing all running tasks in stage 736: Stage finished
20:45:31.445 INFO DAGScheduler - Job 419 finished: runJob at SparkHadoopWriter.scala:83, took 0.054989 s
20:45:31.445 INFO SparkHadoopWriter - Start to commit write Job job_20250612204531231181212762178004_3485.
20:45:31.445 INFO BlockManagerInfo - Removed broadcast_1361_piece0 on localhost:34191 in memory (size: 9.7 KiB, free: 1919.6 MiB)
20:45:31.446 INFO BlockManagerInfo - Removed broadcast_1362_piece0 on localhost:34191 in memory (size: 10.6 KiB, free: 1919.7 MiB)
20:45:31.450 INFO SparkHadoopWriter - Write Job job_20250612204531231181212762178004_3485 committed. Elapsed time: 4 ms.
20:45:31.460 INFO HadoopFileSystemWrapper - Concatenating 3 parts to /tmp/local13085480587404488327/markdups13860296519650900937.bam
20:45:31.464 INFO HadoopFileSystemWrapper - Concatenating to /tmp/local13085480587404488327/markdups13860296519650900937.bam done
20:45:31.464 INFO IndexFileMerger - Merging .sbi files in temp directory /tmp/local13085480587404488327/markdups13860296519650900937.bam.parts/ to /tmp/local13085480587404488327/markdups13860296519650900937.bam.sbi
20:45:31.468 INFO IndexFileMerger - Done merging .sbi files
20:45:31.468 INFO IndexFileMerger - Merging .bai files in temp directory /tmp/local13085480587404488327/markdups13860296519650900937.bam.parts/ to /tmp/local13085480587404488327/markdups13860296519650900937.bam.bai
20:45:31.472 INFO IndexFileMerger - Done merging .bai files
20:45:31.472 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 8:45:31 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes.
Runtime.totalMemory()=1254096896
20:45:31.480 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:31.480 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
20:45:31.480 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
20:45:31.480 INFO MarkDuplicatesSpark - Executing as runner@pkrvmxyh4eaekms on Linux v6.11.0-1015-azure amd64
20:45:31.480 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
20:45:31.480 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:45:31 PM UTC
20:45:31.480 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:31.480 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:31.480 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
20:45:31.480 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
20:45:31.480 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
20:45:31.480 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
20:45:31.480 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
20:45:31.480 INFO MarkDuplicatesSpark - Inflater: IntelInflater
20:45:31.480 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
20:45:31.481 INFO MarkDuplicatesSpark - Requester pays: disabled
20:45:31.481 INFO MarkDuplicatesSpark - Initializing engine
20:45:31.481 INFO MarkDuplicatesSpark - Done initializing engine
20:45:31.481 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
20:45:31.483 INFO MemoryStore - Block broadcast_1367 stored as values in memory (estimated size 306.3 KiB, free 1918.4 MiB)
20:45:31.490 INFO MemoryStore - Block broadcast_1367_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1918.4 MiB)
20:45:31.490 INFO BlockManagerInfo - Added broadcast_1367_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1919.6 MiB)
20:45:31.490 INFO SparkContext - Created broadcast 1367 from newAPIHadoopFile at PathSplitSource.java:96
20:45:31.511 INFO MemoryStore - Block broadcast_1368 stored as values in memory (estimated size 306.3 KiB, free 1918.1 MiB)
20:45:31.517 INFO MemoryStore - Block broadcast_1368_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1918.0 MiB)
20:45:31.518 INFO BlockManagerInfo - Added broadcast_1368_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1919.5 MiB)
20:45:31.518 INFO SparkContext - Created broadcast 1368 from newAPIHadoopFile at PathSplitSource.java:96
20:45:31.551 INFO SparkContext - Starting job: collect at SparkUtils.java:205
20:45:31.552 INFO FileInputFormat - Total input files to process : 1
20:45:31.552 INFO DAGScheduler - Registering RDD 3499 (mapToPair at SparkUtils.java:161) as input to shuffle 102
20:45:31.552 INFO DAGScheduler - Got job 420 (collect at SparkUtils.java:205) with 1 output partitions
20:45:31.553 INFO DAGScheduler - Final stage: ResultStage 738 (collect at SparkUtils.java:205)
20:45:31.553 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 737)
20:45:31.553 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 737)
20:45:31.553 INFO DAGScheduler - Submitting ShuffleMapStage 737 (MapPartitionsRDD[3499] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:31.570 INFO MemoryStore - Block broadcast_1369 stored as values in memory (estimated size 460.4 KiB, free 1917.5 MiB)
20:45:31.572 INFO MemoryStore - Block broadcast_1369_piece0 stored as bytes in memory (estimated size 202.5 KiB, free 1917.4 MiB)
20:45:31.572 INFO BlockManagerInfo - Added broadcast_1369_piece0 in memory on localhost:34191 (size: 202.5 KiB, free: 1919.3 MiB)
20:45:31.572 INFO SparkContext - Created broadcast 1369 from broadcast at DAGScheduler.scala:1580
20:45:31.573 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 737 (MapPartitionsRDD[3499] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:31.573 INFO TaskSchedulerImpl - Adding task set 737.0 with 1 tasks resource profile 0
20:45:31.573 INFO TaskSetManager - Starting task 0.0 in stage 737.0 (TID 675) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9736 bytes)
20:45:31.573 INFO Executor - Running task 0.0 in stage 737.0 (TID 675)
20:45:31.604 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug2.bam:0+1856
20:45:31.609 INFO Executor - Finished task 0.0 in stage 737.0 (TID 675). 1148 bytes result sent to driver
20:45:31.609 INFO TaskSetManager - Finished task 0.0 in stage 737.0 (TID 675) in 36 ms on localhost (executor driver) (1/1)
20:45:31.609 INFO TaskSchedulerImpl - Removed TaskSet 737.0, whose tasks have all completed, from pool
20:45:31.609 INFO DAGScheduler - ShuffleMapStage 737 (mapToPair at SparkUtils.java:161) finished in 0.056 s
20:45:31.610 INFO DAGScheduler - looking for newly runnable stages
20:45:31.610 INFO DAGScheduler - running: HashSet()
20:45:31.610 INFO DAGScheduler - waiting: HashSet(ResultStage 738)
20:45:31.610 INFO DAGScheduler - failed: HashSet()
20:45:31.610 INFO DAGScheduler - Submitting ResultStage 738 (MapPartitionsRDD[3502] at mapPartitions at SparkUtils.java:188), which has no missing parents
20:45:31.611 INFO MemoryStore - Block broadcast_1370 stored as values in memory (estimated size 7.4 KiB, free 1917.3 MiB)
20:45:31.611 INFO MemoryStore - Block broadcast_1370_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1917.3 MiB)
20:45:31.611 INFO BlockManagerInfo - Added broadcast_1370_piece0 in memory on localhost:34191 (size: 4.1 KiB, free: 1919.3 MiB)
20:45:31.611 INFO SparkContext - Created broadcast 1370 from broadcast at DAGScheduler.scala:1580
20:45:31.612 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 738 (MapPartitionsRDD[3502] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
20:45:31.612 INFO TaskSchedulerImpl - Adding task set 738.0 with 1 tasks resource profile 0
20:45:31.612 INFO TaskSetManager - Starting task 0.0 in stage 738.0 (TID 676) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:31.612 INFO Executor - Running task 0.0 in stage 738.0 (TID 676)
20:45:31.614 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.614 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.616 INFO Executor - Finished task 0.0 in stage 738.0 (TID 676). 2495 bytes result sent to driver
20:45:31.617 INFO TaskSetManager - Finished task 0.0 in stage 738.0 (TID 676) in 4 ms on localhost (executor driver) (1/1)
20:45:31.617 INFO DAGScheduler - ResultStage 738 (collect at SparkUtils.java:205) finished in 0.007 s
20:45:31.617 INFO DAGScheduler - Job 420 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:31.617 INFO TaskSchedulerImpl - Removed TaskSet 738.0, whose tasks have all completed, from pool
20:45:31.618 INFO TaskSchedulerImpl - Killing all running tasks in stage 738: Stage finished
20:45:31.618 INFO DAGScheduler - Job 420 finished: collect at SparkUtils.java:205, took 0.066691 s
20:45:31.624 INFO MemoryStore - Block broadcast_1371 stored as values in memory (estimated size 1632.0 B, free 1917.3 MiB)
20:45:31.631 INFO MemoryStore - Block broadcast_1371_piece0 stored as bytes in memory (estimated size 145.0 B, free 1917.3 MiB)
20:45:31.632 INFO BlockManagerInfo - Added broadcast_1371_piece0 in memory on localhost:34191 (size: 145.0 B, free: 1919.3 MiB)
20:45:31.632 INFO BlockManagerInfo - Removed broadcast_1370_piece0 on localhost:34191 in memory (size: 4.1 KiB, free: 1919.3 MiB)
20:45:31.632 INFO SparkContext - Created broadcast 1371 from broadcast at MarkDuplicatesSparkUtils.java:126
20:45:31.633 INFO MemoryStore - Block broadcast_1372 stored as values in memory (estimated size 304.0 B, free 1917.3 MiB)
20:45:31.633 INFO MemoryStore - Block broadcast_1372_piece0 stored as bytes in memory (estimated size 37.0 B, free 1917.3 MiB)
20:45:31.634 INFO BlockManagerInfo - Added broadcast_1372_piece0 in memory on localhost:34191 (size: 37.0 B, free: 1919.3 MiB)
20:45:31.634 INFO SparkContext - Created broadcast 1372 from broadcast at MarkDuplicatesSparkUtils.java:127
20:45:31.635 INFO BlockManagerInfo - Removed broadcast_1357_piece0 on localhost:34191 in memory (size: 145.0 B, free: 1919.3 MiB)
20:45:31.635 INFO BlockManagerInfo - Removed broadcast_1358_piece0 on localhost:34191 in memory (size: 37.0 B, free: 1919.3 MiB)
20:45:31.636 INFO BlockManagerInfo - Removed broadcast_1367_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1919.4 MiB)
20:45:31.636 INFO BlockManagerInfo - Removed broadcast_1354_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1919.5 MiB)
20:45:31.637 INFO BlockManagerInfo - Removed broadcast_1369_piece0 on localhost:34191 in memory (size: 202.5 KiB, free: 1919.7 MiB)
20:45:31.638 INFO BlockManagerInfo - Removed broadcast_1366_piece0 on localhost:34191 in memory (size: 73.9 KiB, free: 1919.7 MiB)
20:45:31.638 INFO BlockManagerInfo - Removed broadcast_1364_piece0 on localhost:34191 in memory (size: 1850.0 B, free: 1919.7 MiB)
20:45:31.638 INFO BlockManagerInfo - Removed broadcast_1363_piece0 on localhost:34191 in memory (size: 1850.0 B, free: 1919.7 MiB)
20:45:31.650 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
20:45:31.651 INFO DAGScheduler - Registering RDD 3511 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 105
20:45:31.651 INFO DAGScheduler - Registering RDD 3515 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 104
20:45:31.651 INFO DAGScheduler - Registering RDD 3519 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 103
20:45:31.652 INFO DAGScheduler - Got job 421 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
20:45:31.652 INFO DAGScheduler - Final stage: ResultStage 743 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
20:45:31.652 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 742)
20:45:31.652 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 742)
20:45:31.652 INFO DAGScheduler - Submitting ShuffleMapStage 740 (MapPartitionsRDD[3511] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
20:45:31.653 INFO MemoryStore - Block broadcast_1373 stored as values in memory (estimated size 24.2 KiB, free 1919.0 MiB)
20:45:31.654 INFO MemoryStore - Block broadcast_1373_piece0 stored as bytes in memory (estimated size 12.6 KiB, free 1919.0 MiB)
20:45:31.654 INFO BlockManagerInfo - Added broadcast_1373_piece0 in memory on localhost:34191 (size: 12.6 KiB, free: 1919.7 MiB)
20:45:31.654 INFO SparkContext - Created broadcast 1373 from broadcast at DAGScheduler.scala:1580
20:45:31.654 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 740 (MapPartitionsRDD[3511] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
20:45:31.654 INFO TaskSchedulerImpl - Adding task set 740.0 with 1 tasks resource profile 0
20:45:31.656 INFO TaskSetManager - Starting task 0.0 in stage 740.0 (TID 677) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
20:45:31.656 INFO Executor - Running task 0.0 in stage 740.0 (TID 677)
20:45:31.658 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.658 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.661 INFO Executor - Finished task 0.0 in stage 740.0 (TID 677). 1922 bytes result sent to driver
20:45:31.661 INFO TaskSetManager - Finished task 0.0 in stage 740.0 (TID 677) in 6 ms on localhost (executor driver) (1/1)
20:45:31.661 INFO TaskSchedulerImpl - Removed TaskSet 740.0, whose tasks have all completed, from pool
20:45:31.662 INFO DAGScheduler - ShuffleMapStage 740 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.010 s
20:45:31.662 INFO DAGScheduler - looking for newly runnable stages
20:45:31.662 INFO DAGScheduler - running: HashSet()
20:45:31.662 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 741, ShuffleMapStage 742, ResultStage 743)
20:45:31.662 INFO DAGScheduler - failed: HashSet()
20:45:31.662 INFO DAGScheduler - Submitting ShuffleMapStage 741 (MapPartitionsRDD[3515] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
20:45:31.663 INFO MemoryStore - Block broadcast_1374 stored as values in memory (estimated size 28.3 KiB, free 1918.9 MiB)
20:45:31.664 INFO MemoryStore - Block broadcast_1374_piece0 stored as bytes in memory (estimated size 14.8 KiB, free 1918.9 MiB)
20:45:31.664 INFO BlockManagerInfo - Added broadcast_1374_piece0 in memory on localhost:34191 (size: 14.8 KiB, free: 1919.7 MiB)
20:45:31.664 INFO SparkContext - Created broadcast 1374 from broadcast at DAGScheduler.scala:1580
20:45:31.664 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 741 (MapPartitionsRDD[3515] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
20:45:31.664 INFO TaskSchedulerImpl - Adding task set 741.0 with 1 tasks resource profile 0
20:45:31.665 INFO TaskSetManager - Starting task 0.0 in stage 741.0 (TID 678) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
20:45:31.665 INFO Executor - Running task 0.0 in stage 741.0 (TID 678)
20:45:31.666 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.667 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.671 INFO Executor - Finished task 0.0 in stage 741.0 (TID 678). 1922 bytes result sent to driver
20:45:31.672 INFO TaskSetManager - Finished task 0.0 in stage 741.0 (TID 678) in 7 ms on localhost (executor driver) (1/1)
20:45:31.672 INFO TaskSchedulerImpl - Removed TaskSet 741.0, whose tasks have all completed, from pool
20:45:31.672 INFO DAGScheduler - ShuffleMapStage 741 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.010 s
20:45:31.672 INFO DAGScheduler - looking for newly runnable stages
20:45:31.672 INFO DAGScheduler - running: HashSet()
20:45:31.672 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 742, ResultStage 743)
20:45:31.672 INFO DAGScheduler - failed: HashSet()
20:45:31.672 INFO DAGScheduler - Submitting ShuffleMapStage 742 (MapPartitionsRDD[3519] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
20:45:31.673 INFO MemoryStore - Block broadcast_1375 stored as values in memory (estimated size 19.0 KiB, free 1918.9 MiB)
20:45:31.674 INFO MemoryStore - Block broadcast_1375_piece0 stored as bytes in memory (estimated size 9.7 KiB, free 1918.9 MiB)
20:45:31.674 INFO BlockManagerInfo - Added broadcast_1375_piece0 in memory on localhost:34191 (size: 9.7 KiB, free: 1919.7 MiB)
20:45:31.674 INFO SparkContext - Created broadcast 1375 from broadcast at DAGScheduler.scala:1580
20:45:31.674 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 742 (MapPartitionsRDD[3519] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
20:45:31.674 INFO TaskSchedulerImpl - Adding task set 742.0 with 1 tasks resource profile 0
20:45:31.675 INFO TaskSetManager - Starting task 0.0 in stage 742.0 (TID 679) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:31.675 INFO Executor - Running task 0.0 in stage 742.0 (TID 679)
20:45:31.677 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.677 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.680 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.680 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.682 INFO Executor - Finished task 0.0 in stage 742.0 (TID 679). 1922 bytes result sent to driver
20:45:31.682 INFO TaskSetManager - Finished task 0.0 in stage 742.0 (TID 679) in 7 ms on localhost (executor driver) (1/1)
20:45:31.682 INFO TaskSchedulerImpl - Removed TaskSet 742.0, whose tasks have all completed, from pool
20:45:31.682 INFO DAGScheduler - ShuffleMapStage 742 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.010 s
20:45:31.682 INFO DAGScheduler - looking for newly runnable stages
20:45:31.682 INFO DAGScheduler - running: HashSet()
20:45:31.683 INFO DAGScheduler - waiting: HashSet(ResultStage 743)
20:45:31.683 INFO DAGScheduler - failed: HashSet()
20:45:31.683 INFO DAGScheduler - Submitting ResultStage 743 (MapPartitionsRDD[3521] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
20:45:31.684 INFO MemoryStore - Block broadcast_1376 stored as values in memory (estimated size 20.2 KiB, free 1918.9 MiB)
20:45:31.684 INFO MemoryStore - Block broadcast_1376_piece0 stored as bytes in memory (estimated size 10.6 KiB, free 1918.9 MiB)
20:45:31.684 INFO BlockManagerInfo - Added broadcast_1376_piece0 in memory on localhost:34191 (size: 10.6 KiB, free: 1919.7 MiB)
20:45:31.684 INFO SparkContext - Created broadcast 1376 from broadcast at DAGScheduler.scala:1580
20:45:31.685 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 743 (MapPartitionsRDD[3521] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
20:45:31.685 INFO TaskSchedulerImpl - Adding task set 743.0 with 1 tasks resource profile 0
20:45:31.685 INFO TaskSetManager - Starting task 0.0 in stage 743.0 (TID 680) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:31.686 INFO Executor - Running task 0.0 in stage 743.0 (TID 680)
20:45:31.687 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.688 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.691 INFO Executor - Finished task 0.0 in stage 743.0 (TID 680). 2053 bytes result sent to driver
20:45:31.692 INFO TaskSetManager - Finished task 0.0 in stage 743.0 (TID 680) in 7 ms on localhost (executor driver) (1/1)
20:45:31.692 INFO TaskSchedulerImpl - Removed TaskSet 743.0, whose tasks have all completed, from pool
20:45:31.692 INFO DAGScheduler - ResultStage 743 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.009 s
20:45:31.692 INFO DAGScheduler - Job 421 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:31.692 INFO TaskSchedulerImpl - Killing all running tasks in stage 743: Stage finished
20:45:31.692 INFO DAGScheduler - Job 421 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.041658 s
20:45:31.708 INFO MemoryStore - Block broadcast_1377 stored as values in memory (estimated size 20.3 KiB, free 1918.8 MiB)
20:45:31.709 INFO MemoryStore - Block broadcast_1377_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1918.8 MiB)
20:45:31.709 INFO BlockManagerInfo - Added broadcast_1377_piece0 in memory on localhost:34191 (size: 1850.0 B, free: 1919.7 MiB)
20:45:31.709 INFO SparkContext - Created broadcast 1377 from broadcast at ReadsSparkSink.java:133
20:45:31.710 INFO MemoryStore - Block broadcast_1378 stored as values in memory (estimated size 20.4 KiB, free 1918.8 MiB)
20:45:31.711 INFO MemoryStore - Block broadcast_1378_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1918.8 MiB)
20:45:31.711 INFO BlockManagerInfo - Added broadcast_1378_piece0 in memory on localhost:34191 (size: 1850.0 B, free: 1919.7 MiB)
20:45:31.711 INFO SparkContext - Created broadcast 1378 from broadcast at BamSink.java:76
20:45:31.713 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:45:31.713 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:31.713 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:31.738 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
20:45:31.739 INFO DAGScheduler - Registering RDD 3523 (mapToPair at SparkUtils.java:161) as input to shuffle 106
20:45:31.739 INFO DAGScheduler - Got job 422 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
20:45:31.739 INFO DAGScheduler - Final stage: ResultStage 748 (runJob at SparkHadoopWriter.scala:83)
20:45:31.739 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 747)
20:45:31.739 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 747)
20:45:31.739 INFO DAGScheduler - Submitting ShuffleMapStage 747 (MapPartitionsRDD[3523] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:31.740 INFO MemoryStore - Block broadcast_1379 stored as values in memory (estimated size 18.1 KiB, free 1918.8 MiB)
20:45:31.741 INFO MemoryStore - Block broadcast_1379_piece0 stored as bytes in memory (estimated size 9.5 KiB, free 1918.8 MiB)
20:45:31.741 INFO BlockManagerInfo - Added broadcast_1379_piece0 in memory on localhost:34191 (size: 9.5 KiB, free: 1919.7 MiB)
20:45:31.741 INFO SparkContext - Created broadcast 1379 from broadcast at DAGScheduler.scala:1580
20:45:31.741 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 747 (MapPartitionsRDD[3523] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:31.741 INFO TaskSchedulerImpl - Adding task set 747.0 with 1 tasks resource profile 0
20:45:31.742 INFO TaskSetManager - Starting task 0.0 in stage 747.0 (TID 681) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:31.743 INFO Executor - Running task 0.0 in stage 747.0 (TID 681)
20:45:31.745 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.745 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.749 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.749 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.750 INFO Executor - Finished task 0.0 in stage 747.0 (TID 681). 1922 bytes result sent to driver
20:45:31.751 INFO TaskSetManager - Finished task 0.0 in stage 747.0 (TID 681) in 9 ms on localhost (executor driver) (1/1)
20:45:31.751 INFO TaskSchedulerImpl - Removed TaskSet 747.0, whose tasks have all completed, from pool
20:45:31.751 INFO DAGScheduler - ShuffleMapStage 747 (mapToPair at SparkUtils.java:161) finished in 0.011 s
20:45:31.751 INFO DAGScheduler - looking for newly runnable stages
20:45:31.751 INFO DAGScheduler - running: HashSet()
20:45:31.751 INFO DAGScheduler - waiting: HashSet(ResultStage 748)
20:45:31.751 INFO DAGScheduler - failed: HashSet()
20:45:31.751 INFO DAGScheduler - Submitting ResultStage 748 (MapPartitionsRDD[3528] at mapToPair at BamSink.java:91), which has no missing parents
20:45:31.762 INFO MemoryStore - Block broadcast_1380 stored as values in memory (estimated size 163.7 KiB, free 1918.6 MiB)
20:45:31.763 INFO MemoryStore - Block broadcast_1380_piece0 stored as bytes in memory (estimated size 73.9 KiB, free 1918.6 MiB)
20:45:31.764 INFO BlockManagerInfo - Added broadcast_1380_piece0 in memory on localhost:34191 (size: 73.9 KiB, free: 1919.6 MiB)
20:45:31.764 INFO SparkContext - Created broadcast 1380 from broadcast at DAGScheduler.scala:1580
20:45:31.764 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 748 (MapPartitionsRDD[3528] at mapToPair at BamSink.java:91) (first 15 tasks are for partitions Vector(0))
20:45:31.764 INFO TaskSchedulerImpl - Adding task set 748.0 with 1 tasks resource profile 0
20:45:31.765 INFO TaskSetManager - Starting task 0.0 in stage 748.0 (TID 682) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:31.765 INFO Executor - Running task 0.0 in stage 748.0 (TID 682)
20:45:31.772 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:31.772 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:31.776 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:45:31.776 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:31.776 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:31.776 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:45:31.776 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:31.776 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:31.801 INFO FileOutputCommitter - Saved output of task 'attempt_202506122045311625720344647598399_3528_r_000000_0' to file:/tmp/local13085480587404488327/markdups190703069564975446.bam.parts/_temporary/0/task_202506122045311625720344647598399_3528_r_000000
20:45:31.801 INFO SparkHadoopMapRedUtil - attempt_202506122045311625720344647598399_3528_r_000000_0: Committed. Elapsed time: 0 ms.
20:45:31.801 INFO Executor - Finished task 0.0 in stage 748.0 (TID 682). 1858 bytes result sent to driver
20:45:31.802 INFO TaskSetManager - Finished task 0.0 in stage 748.0 (TID 682) in 37 ms on localhost (executor driver) (1/1)
20:45:31.802 INFO TaskSchedulerImpl - Removed TaskSet 748.0, whose tasks have all completed, from pool
20:45:31.802 INFO DAGScheduler - ResultStage 748 (runJob at SparkHadoopWriter.scala:83) finished in 0.050 s
20:45:31.802 INFO DAGScheduler - Job 422 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:31.802 INFO TaskSchedulerImpl - Killing all running tasks in stage 748: Stage finished
20:45:31.802 INFO DAGScheduler - Job 422 finished: runJob at SparkHadoopWriter.scala:83, took 0.064189 s
20:45:31.803 INFO SparkHadoopWriter - Start to commit write Job job_202506122045311625720344647598399_3528.
20:45:31.809 INFO SparkHadoopWriter - Write Job job_202506122045311625720344647598399_3528 committed. Elapsed time: 5 ms.
20:45:31.824 INFO HadoopFileSystemWrapper - Concatenating 3 parts to /tmp/local13085480587404488327/markdups190703069564975446.bam
20:45:31.830 INFO HadoopFileSystemWrapper - Concatenating to /tmp/local13085480587404488327/markdups190703069564975446.bam done
20:45:31.830 INFO IndexFileMerger - Merging .sbi files in temp directory /tmp/local13085480587404488327/markdups190703069564975446.bam.parts/ to /tmp/local13085480587404488327/markdups190703069564975446.bam.sbi
20:45:31.834 INFO IndexFileMerger - Done merging .sbi files
20:45:31.834 INFO IndexFileMerger - Merging .bai files in temp directory /tmp/local13085480587404488327/markdups190703069564975446.bam.parts/ to /tmp/local13085480587404488327/markdups190703069564975446.bam.bai
20:45:31.839 INFO IndexFileMerger - Done merging .bai files
20:45:31.839 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 8:45:31 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:32 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:32 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:33 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:33 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:33 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:34 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:35 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:36 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:37 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:37 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:37 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:38 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:38 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:39 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:41 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:41 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:41 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:42 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:42 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:42 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:44 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:45 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:45 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:46 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:46 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:46 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:47 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:48 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:49 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:49 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:50 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:50 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:51 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:51 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:52 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:53 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:54 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:55 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:57 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.03 minutes. Runtime.totalMemory()=1738539008
[June 12, 2025 at 8:45:57 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
20:45:57.574 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:57.574 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
20:45:57.574 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
20:45:57.574 INFO MarkDuplicatesSpark - Executing as runner@pkrvmxyh4eaekms on Linux v6.11.0-1015-azure amd64
20:45:57.574 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
20:45:57.574 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:45:57 PM UTC
20:45:57.574 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:57.574 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:57.574 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
20:45:57.574 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
20:45:57.574 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
20:45:57.574 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
20:45:57.574 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
20:45:57.574 INFO MarkDuplicatesSpark - Inflater: IntelInflater
20:45:57.574 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
20:45:57.574 INFO MarkDuplicatesSpark - Requester pays: disabled
20:45:57.574 INFO MarkDuplicatesSpark - Initializing engine
20:45:57.574 INFO MarkDuplicatesSpark - Done initializing engine
20:45:57.574 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
20:45:57.577 INFO MemoryStore - Block broadcast_1957 stored as values in memory (estimated size 306.3 KiB, free 1916.7 MiB)
20:45:57.584 INFO MemoryStore - Block broadcast_1957_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1916.6 MiB)
20:45:57.584 INFO BlockManagerInfo - Added broadcast_1957_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1919.2 MiB)
20:45:57.584 INFO SparkContext - Created broadcast 1957 from newAPIHadoopFile at PathSplitSource.java:96
20:45:57.605 INFO MemoryStore - Block broadcast_1958 stored as values in memory (estimated size 306.3 KiB, free 1916.3 MiB)
20:45:57.612 INFO MemoryStore - Block broadcast_1958_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1916.3 MiB)
20:45:57.612 INFO BlockManagerInfo - Added broadcast_1958_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1919.1 MiB)
20:45:57.612 INFO SparkContext - Created broadcast 1958 from newAPIHadoopFile at PathSplitSource.java:96
20:45:57.645 INFO SparkContext - Starting job: collect at SparkUtils.java:205
20:45:57.646 INFO FileInputFormat - Total input files to process : 1
20:45:57.646 INFO DAGScheduler - Registering RDD 5420 (mapToPair at SparkUtils.java:161) as input to shuffle 297
20:45:57.646 INFO DAGScheduler - Got job 553 (collect at SparkUtils.java:205) with 1 output partitions
20:45:57.646 INFO DAGScheduler - Final stage: ResultStage 1210 (collect at SparkUtils.java:205)
20:45:57.646 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1209)
20:45:57.647 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1209)
20:45:57.647 INFO DAGScheduler - Submitting ShuffleMapStage 1209 (MapPartitionsRDD[5420] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:57.664 INFO MemoryStore - Block broadcast_1959 stored as values in memory (estimated size 493.8 KiB, free 1915.8 MiB)
20:45:57.666 INFO MemoryStore - Block broadcast_1959_piece0 stored as bytes in memory (estimated size 210.1 KiB, free 1915.6 MiB)
20:45:57.666 INFO BlockManagerInfo - Added broadcast_1959_piece0 in memory on localhost:34191 (size: 210.1 KiB, free: 1918.9 MiB)
20:45:57.666 INFO SparkContext - Created broadcast 1959 from broadcast at DAGScheduler.scala:1580
20:45:57.666 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1209 (MapPartitionsRDD[5420] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:57.667 INFO TaskSchedulerImpl - Adding task set 1209.0 with 1 tasks resource profile 0
20:45:57.667 INFO TaskSetManager - Starting task 0.0 in stage 1209.0 (TID 1053) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9758 bytes)
20:45:57.667 INFO Executor - Running task 0.0 in stage 1209.0 (TID 1053)
20:45:57.698 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/example.chr1.1-1K.unmarkedDups.bam:0+13412
20:45:57.704 INFO Executor - Finished task 0.0 in stage 1209.0 (TID 1053). 1148 bytes result sent to driver
20:45:57.704 INFO TaskSetManager - Finished task 0.0 in stage 1209.0 (TID 1053) in 37 ms on localhost (executor driver) (1/1)
20:45:57.704 INFO TaskSchedulerImpl - Removed TaskSet 1209.0, whose tasks have all completed, from pool
20:45:57.704 INFO DAGScheduler - ShuffleMapStage 1209 (mapToPair at SparkUtils.java:161) finished in 0.057 s
20:45:57.704 INFO DAGScheduler - looking for newly runnable stages
20:45:57.704 INFO DAGScheduler - running: HashSet()
20:45:57.704 INFO DAGScheduler - waiting: HashSet(ResultStage 1210)
20:45:57.704 INFO DAGScheduler - failed: HashSet()
20:45:57.704 INFO DAGScheduler - Submitting ResultStage 1210 (MapPartitionsRDD[5423] at mapPartitions at SparkUtils.java:188), which has no missing parents
20:45:57.705 INFO MemoryStore - Block broadcast_1960 stored as values in memory (estimated size 7.4 KiB, free 1915.6 MiB)
20:45:57.705 INFO MemoryStore - Block broadcast_1960_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1915.6 MiB)
20:45:57.705 INFO BlockManagerInfo - Added broadcast_1960_piece0 in memory on localhost:34191 (size: 4.1 KiB, free: 1918.9 MiB)
20:45:57.706 INFO SparkContext - Created broadcast 1960 from broadcast at DAGScheduler.scala:1580
20:45:57.706 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1210 (MapPartitionsRDD[5423] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
20:45:57.706 INFO TaskSchedulerImpl - Adding task set 1210.0 with 1 tasks resource profile 0
20:45:57.706 INFO TaskSetManager - Starting task 0.0 in stage 1210.0 (TID 1054) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:57.706 INFO Executor - Running task 0.0 in stage 1210.0 (TID 1054)
20:45:57.708 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:57.708 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:57.711 INFO Executor - Finished task 0.0 in stage 1210.0 (TID 1054). 2329 bytes result sent to driver
20:45:57.712 INFO TaskSetManager - Finished task 0.0 in stage 1210.0 (TID 1054) in 6 ms on localhost (executor driver) (1/1)
20:45:57.712 INFO TaskSchedulerImpl - Removed TaskSet 1210.0, whose tasks have all completed, from pool
20:45:57.712 INFO DAGScheduler - ResultStage 1210 (collect at SparkUtils.java:205) finished in 0.007 s
20:45:57.712 INFO DAGScheduler - Job 553 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:57.712 INFO TaskSchedulerImpl - Killing all running tasks in stage 1210: Stage finished
20:45:57.712 INFO DAGScheduler - Job 553 finished: collect at SparkUtils.java:205, took 0.066537 s
20:45:57.716 INFO MemoryStore - Block broadcast_1961 stored as values in memory (estimated size 7.8 KiB, free 1915.6 MiB)
20:45:57.717 INFO MemoryStore - Block broadcast_1961_piece0 stored as bytes in memory (estimated size 540.0 B, free 1915.6 MiB)
20:45:57.717 INFO BlockManagerInfo - Added broadcast_1961_piece0 in memory on localhost:34191 (size: 540.0 B, free: 1918.9 MiB)
20:45:57.717 INFO SparkContext - Created broadcast 1961 from broadcast at MarkDuplicatesSparkUtils.java:126
20:45:57.717 INFO MemoryStore - Block broadcast_1962 stored as values in memory (estimated size 2.6 KiB, free 1915.6 MiB)
20:45:57.718 INFO MemoryStore - Block broadcast_1962_piece0 stored as bytes in memory (estimated size 209.0 B, free 1915.6 MiB)
20:45:57.718 INFO BlockManagerInfo - Added broadcast_1962_piece0 in memory on localhost:34191 (size: 209.0 B, free: 1918.9 MiB)
20:45:57.718 INFO SparkContext - Created broadcast 1962 from broadcast at MarkDuplicatesSparkUtils.java:127
20:45:57.732 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
20:45:57.732 INFO DAGScheduler - Registering RDD 5432 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 300
20:45:57.732 INFO DAGScheduler - Registering RDD 5436 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 299
20:45:57.732 INFO DAGScheduler - Registering RDD 5440 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 298
20:45:57.733 INFO DAGScheduler - Got job 554 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
20:45:57.733 INFO DAGScheduler - Final stage: ResultStage 1215 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
20:45:57.733 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1214)
20:45:57.733 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1214)
20:45:57.733 INFO DAGScheduler - Submitting ShuffleMapStage 1212 (MapPartitionsRDD[5432] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
20:45:57.734 INFO MemoryStore - Block broadcast_1963 stored as values in memory (estimated size 57.6 KiB, free 1915.5 MiB)
20:45:57.734 INFO MemoryStore - Block broadcast_1963_piece0 stored as bytes in memory (estimated size 19.8 KiB, free 1915.5 MiB)
20:45:57.735 INFO BlockManagerInfo - Added broadcast_1963_piece0 in memory on localhost:34191 (size: 19.8 KiB, free: 1918.9 MiB)
20:45:57.735 INFO SparkContext - Created broadcast 1963 from broadcast at DAGScheduler.scala:1580
20:45:57.735 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1212 (MapPartitionsRDD[5432] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
20:45:57.735 INFO TaskSchedulerImpl - Adding task set 1212.0 with 1 tasks resource profile 0
20:45:57.736 INFO TaskSetManager - Starting task 0.0 in stage 1212.0 (TID 1055) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
20:45:57.736 INFO Executor - Running task 0.0 in stage 1212.0 (TID 1055)
20:45:57.738 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:57.738 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:57.741 INFO Executor - Finished task 0.0 in stage 1212.0 (TID 1055). 1922 bytes result sent to driver
20:45:57.742 INFO TaskSetManager - Finished task 0.0 in stage 1212.0 (TID 1055) in 7 ms on localhost (executor driver) (1/1)
20:45:57.742 INFO TaskSchedulerImpl - Removed TaskSet 1212.0, whose tasks have all completed, from pool
20:45:57.742 INFO DAGScheduler - ShuffleMapStage 1212 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.009 s
20:45:57.742 INFO DAGScheduler - looking for newly runnable stages
20:45:57.742 INFO DAGScheduler - running: HashSet()
20:45:57.742 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1213, ShuffleMapStage 1214, ResultStage 1215)
20:45:57.742 INFO DAGScheduler - failed: HashSet()
20:45:57.742 INFO DAGScheduler - Submitting ShuffleMapStage 1213 (MapPartitionsRDD[5436] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
20:45:57.743 INFO MemoryStore - Block broadcast_1964 stored as values in memory (estimated size 61.7 KiB, free 1915.4 MiB)
20:45:57.744 INFO MemoryStore - Block broadcast_1964_piece0 stored as bytes in memory (estimated size 22.0 KiB, free 1915.4 MiB)
20:45:57.744 INFO BlockManagerInfo - Added broadcast_1964_piece0 in memory on localhost:34191 (size: 22.0 KiB, free: 1918.8 MiB)
20:45:57.744 INFO SparkContext - Created broadcast 1964 from broadcast at DAGScheduler.scala:1580
20:45:57.744 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1213 (MapPartitionsRDD[5436] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
20:45:57.744 INFO TaskSchedulerImpl - Adding task set 1213.0 with 1 tasks resource profile 0
20:45:57.744 INFO TaskSetManager - Starting task 0.0 in stage 1213.0 (TID 1056) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
20:45:57.744 INFO Executor - Running task 0.0 in stage 1213.0 (TID 1056)
20:45:57.746 INFO ShuffleBlockFetcherIterator - Getting 1 (2.0 KiB) non-empty blocks including 1 (2.0 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.746 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:57.751 INFO Executor - Finished task 0.0 in stage 1213.0 (TID 1056). 1922 bytes result sent to driver 20:45:57.752 INFO TaskSetManager - Finished task 0.0 in stage 1213.0 (TID 1056) in 8 ms on localhost (executor driver) (1/1) 20:45:57.752 INFO TaskSchedulerImpl - Removed TaskSet 1213.0, whose tasks have all completed, from pool 20:45:57.752 INFO DAGScheduler - ShuffleMapStage 1213 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.010 s 20:45:57.752 INFO DAGScheduler - looking for newly runnable stages 20:45:57.752 INFO DAGScheduler - running: HashSet() 20:45:57.752 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1214, ResultStage 1215) 20:45:57.752 INFO DAGScheduler - failed: HashSet() 20:45:57.752 INFO DAGScheduler - Submitting ShuffleMapStage 1214 (MapPartitionsRDD[5440] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents 20:45:57.753 INFO MemoryStore - Block broadcast_1965 stored as values in memory (estimated size 52.4 KiB, free 1915.4 MiB) 20:45:57.754 INFO MemoryStore - Block broadcast_1965_piece0 stored as bytes in memory (estimated size 17.3 KiB, free 1915.3 MiB) 20:45:57.754 INFO BlockManagerInfo - Added broadcast_1965_piece0 in memory on localhost:34191 (size: 17.3 KiB, free: 1918.8 MiB) 20:45:57.754 INFO SparkContext - Created broadcast 1965 from broadcast at DAGScheduler.scala:1580 20:45:57.754 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1214 (MapPartitionsRDD[5440] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0)) 20:45:57.754 INFO TaskSchedulerImpl - Adding task set 1214.0 with 1 tasks resource profile 0 20:45:57.755 INFO TaskSetManager - Starting task 0.0 in 
stage 1214.0 (TID 1057) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes) 20:45:57.755 INFO Executor - Running task 0.0 in stage 1214.0 (TID 1057) 20:45:57.756 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.756 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:57.759 INFO ShuffleBlockFetcherIterator - Getting 1 (2.7 KiB) non-empty blocks including 1 (2.7 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.759 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:57.761 INFO Executor - Finished task 0.0 in stage 1214.0 (TID 1057). 1922 bytes result sent to driver 20:45:57.761 INFO TaskSetManager - Finished task 0.0 in stage 1214.0 (TID 1057) in 7 ms on localhost (executor driver) (1/1) 20:45:57.761 INFO TaskSchedulerImpl - Removed TaskSet 1214.0, whose tasks have all completed, from pool 20:45:57.761 INFO DAGScheduler - ShuffleMapStage 1214 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.009 s 20:45:57.761 INFO DAGScheduler - looking for newly runnable stages 20:45:57.761 INFO DAGScheduler - running: HashSet() 20:45:57.761 INFO DAGScheduler - waiting: HashSet(ResultStage 1215) 20:45:57.761 INFO DAGScheduler - failed: HashSet() 20:45:57.761 INFO DAGScheduler - Submitting ResultStage 1215 (MapPartitionsRDD[5442] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents 20:45:57.762 INFO MemoryStore - Block broadcast_1966 stored as values in memory (estimated size 53.6 KiB, free 1915.3 MiB) 20:45:57.763 INFO MemoryStore - Block broadcast_1966_piece0 stored as bytes in memory (estimated size 18.0 KiB, free 1915.3 MiB) 20:45:57.763 INFO BlockManagerInfo - Added broadcast_1966_piece0 in memory on localhost:34191 (size: 18.0 KiB, free: 1918.8 MiB) 20:45:57.763 INFO SparkContext 
- Created broadcast 1966 from broadcast at DAGScheduler.scala:1580 20:45:57.764 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1215 (MapPartitionsRDD[5442] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0)) 20:45:57.764 INFO TaskSchedulerImpl - Adding task set 1215.0 with 1 tasks resource profile 0 20:45:57.764 INFO TaskSetManager - Starting task 0.0 in stage 1215.0 (TID 1058) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 20:45:57.764 INFO Executor - Running task 0.0 in stage 1215.0 (TID 1058) 20:45:57.766 INFO ShuffleBlockFetcherIterator - Getting 1 (251.0 B) non-empty blocks including 1 (251.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.766 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:57.767 INFO Executor - Finished task 0.0 in stage 1215.0 (TID 1058). 2371 bytes result sent to driver 20:45:57.767 INFO TaskSetManager - Finished task 0.0 in stage 1215.0 (TID 1058) in 3 ms on localhost (executor driver) (1/1) 20:45:57.768 INFO TaskSchedulerImpl - Removed TaskSet 1215.0, whose tasks have all completed, from pool 20:45:57.768 INFO DAGScheduler - ResultStage 1215 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.006 s 20:45:57.768 INFO DAGScheduler - Job 554 is finished. 
Cancelling potential speculative or zombie tasks for this job 20:45:57.768 INFO TaskSchedulerImpl - Killing all running tasks in stage 1215: Stage finished 20:45:57.768 INFO DAGScheduler - Job 554 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.036063 s 20:45:57.780 INFO MemoryStore - Block broadcast_1967 stored as values in memory (estimated size 179.4 KiB, free 1915.1 MiB) 20:45:57.780 INFO MemoryStore - Block broadcast_1967_piece0 stored as bytes in memory (estimated size 7.9 KiB, free 1915.1 MiB) 20:45:57.781 INFO BlockManagerInfo - Added broadcast_1967_piece0 in memory on localhost:34191 (size: 7.9 KiB, free: 1918.8 MiB) 20:45:57.781 INFO SparkContext - Created broadcast 1967 from broadcast at ReadsSparkSink.java:133 20:45:57.784 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 20:45:57.784 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:45:57.784 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:45:57.801 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83 20:45:57.802 INFO DAGScheduler - Registering RDD 5444 (mapToPair at SparkUtils.java:161) as input to shuffle 301 20:45:57.802 INFO DAGScheduler - Got job 555 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions 20:45:57.802 INFO DAGScheduler - Final stage: ResultStage 1220 (runJob at SparkHadoopWriter.scala:83) 20:45:57.802 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1219) 20:45:57.802 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1219) 20:45:57.802 INFO DAGScheduler - Submitting ShuffleMapStage 1219 (MapPartitionsRDD[5444] at mapToPair at SparkUtils.java:161), which has no missing parents 20:45:57.803 INFO MemoryStore - Block broadcast_1968 stored as values in memory (estimated size 51.5 KiB, free 1915.0 MiB) 20:45:57.804 INFO MemoryStore - Block 
broadcast_1968_piece0 stored as bytes in memory (estimated size 16.8 KiB, free 1915.0 MiB) 20:45:57.804 INFO BlockManagerInfo - Added broadcast_1968_piece0 in memory on localhost:34191 (size: 16.8 KiB, free: 1918.8 MiB) 20:45:57.804 INFO SparkContext - Created broadcast 1968 from broadcast at DAGScheduler.scala:1580 20:45:57.805 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1219 (MapPartitionsRDD[5444] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 20:45:57.805 INFO TaskSchedulerImpl - Adding task set 1219.0 with 1 tasks resource profile 0 20:45:57.805 INFO TaskSetManager - Starting task 0.0 in stage 1219.0 (TID 1059) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes) 20:45:57.805 INFO Executor - Running task 0.0 in stage 1219.0 (TID 1059) 20:45:57.807 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.807 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:57.810 INFO ShuffleBlockFetcherIterator - Getting 1 (2.7 KiB) non-empty blocks including 1 (2.7 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.810 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:57.811 INFO Executor - Finished task 0.0 in stage 1219.0 (TID 1059). 
1922 bytes result sent to driver 20:45:57.812 INFO TaskSetManager - Finished task 0.0 in stage 1219.0 (TID 1059) in 7 ms on localhost (executor driver) (1/1) 20:45:57.812 INFO TaskSchedulerImpl - Removed TaskSet 1219.0, whose tasks have all completed, from pool 20:45:57.812 INFO DAGScheduler - ShuffleMapStage 1219 (mapToPair at SparkUtils.java:161) finished in 0.010 s 20:45:57.812 INFO DAGScheduler - looking for newly runnable stages 20:45:57.812 INFO DAGScheduler - running: HashSet() 20:45:57.812 INFO DAGScheduler - waiting: HashSet(ResultStage 1220) 20:45:57.812 INFO DAGScheduler - failed: HashSet() 20:45:57.812 INFO DAGScheduler - Submitting ResultStage 1220 (MapPartitionsRDD[5450] at saveAsTextFile at SamSink.java:65), which has no missing parents 20:45:57.819 INFO MemoryStore - Block broadcast_1969 stored as values in memory (estimated size 196.7 KiB, free 1914.8 MiB) 20:45:57.820 INFO MemoryStore - Block broadcast_1969_piece0 stored as bytes in memory (estimated size 81.0 KiB, free 1914.8 MiB) 20:45:57.821 INFO BlockManagerInfo - Added broadcast_1969_piece0 in memory on localhost:34191 (size: 81.0 KiB, free: 1918.7 MiB) 20:45:57.821 INFO SparkContext - Created broadcast 1969 from broadcast at DAGScheduler.scala:1580 20:45:57.821 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1220 (MapPartitionsRDD[5450] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0)) 20:45:57.821 INFO TaskSchedulerImpl - Adding task set 1220.0 with 1 tasks resource profile 0 20:45:57.821 INFO TaskSetManager - Starting task 0.0 in stage 1220.0 (TID 1060) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 20:45:57.822 INFO Executor - Running task 0.0 in stage 1220.0 (TID 1060) 20:45:57.825 INFO ShuffleBlockFetcherIterator - Getting 1 (14.8 KiB) non-empty blocks including 1 (14.8 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:57.826 INFO ShuffleBlockFetcherIterator - Started 
0 remote fetches in 0 ms 20:45:57.828 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 20:45:57.828 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:45:57.828 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:45:57.840 INFO FileOutputCommitter - Saved output of task 'attempt_202506122045576300462987537509262_5450_m_000000_0' to file:/tmp/MarkDups10112326201710382841/MarkDups.sam.parts/_temporary/0/task_202506122045576300462987537509262_5450_m_000000 20:45:57.840 INFO SparkHadoopMapRedUtil - attempt_202506122045576300462987537509262_5450_m_000000_0: Committed. Elapsed time: 0 ms. 20:45:57.840 INFO Executor - Finished task 0.0 in stage 1220.0 (TID 1060). 1858 bytes result sent to driver 20:45:57.841 INFO TaskSetManager - Finished task 0.0 in stage 1220.0 (TID 1060) in 20 ms on localhost (executor driver) (1/1) 20:45:57.841 INFO TaskSchedulerImpl - Removed TaskSet 1220.0, whose tasks have all completed, from pool 20:45:57.841 INFO DAGScheduler - ResultStage 1220 (runJob at SparkHadoopWriter.scala:83) finished in 0.028 s 20:45:57.841 INFO DAGScheduler - Job 555 is finished. Cancelling potential speculative or zombie tasks for this job 20:45:57.841 INFO TaskSchedulerImpl - Killing all running tasks in stage 1220: Stage finished 20:45:57.841 INFO DAGScheduler - Job 555 finished: runJob at SparkHadoopWriter.scala:83, took 0.039646 s 20:45:57.841 INFO SparkHadoopWriter - Start to commit write Job job_202506122045576300462987537509262_5450. 20:45:57.845 INFO SparkHadoopWriter - Write Job job_202506122045576300462987537509262_5450 committed. Elapsed time: 3 ms. 
20:45:57.852 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups10112326201710382841/MarkDups.sam 20:45:57.856 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups10112326201710382841/MarkDups.sam done 20:45:57.857 INFO MarkDuplicatesSpark - Shutting down engine [June 12, 2025 at 8:45:57 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008 20:45:57.865 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 20:45:57.865 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable 20:45:57.865 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/ 20:45:57.865 INFO MarkDuplicatesSpark - Executing as runner@pkrvmxyh4eaekms on Linux v6.11.0-1015-azure amd64 20:45:57.865 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10 20:45:57.865 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:45:57 PM UTC 20:45:57.865 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 20:45:57.865 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 20:45:57.865 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2 20:45:57.865 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 20:45:57.865 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 20:45:57.865 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 20:45:57.865 INFO MarkDuplicatesSpark - Deflater: IntelDeflater 20:45:57.865 INFO MarkDuplicatesSpark - Inflater: IntelInflater 20:45:57.865 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20 20:45:57.865 INFO MarkDuplicatesSpark - Requester pays: disabled 20:45:57.865 INFO MarkDuplicatesSpark - Initializing engine 
20:45:57.865 INFO MarkDuplicatesSpark - Done initializing engine 20:45:57.865 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument) 20:45:57.868 INFO MemoryStore - Block broadcast_1970 stored as values in memory (estimated size 306.3 KiB, free 1914.5 MiB) 20:45:57.879 INFO MemoryStore - Block broadcast_1970_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1914.4 MiB) 20:45:57.879 INFO BlockManagerInfo - Added broadcast_1970_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1918.6 MiB) 20:45:57.879 INFO SparkContext - Created broadcast 1970 from newAPIHadoopFile at PathSplitSource.java:96 20:45:57.914 INFO MemoryStore - Block broadcast_1971 stored as values in memory (estimated size 306.3 KiB, free 1914.1 MiB) 20:45:57.921 INFO MemoryStore - Block broadcast_1971_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1914.0 MiB) 20:45:57.921 INFO BlockManagerInfo - Added broadcast_1971_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1918.6 MiB) 20:45:57.921 INFO SparkContext - Created broadcast 1971 from newAPIHadoopFile at PathSplitSource.java:96 20:45:57.954 INFO SparkContext - Starting job: collect at SparkUtils.java:205 20:45:57.955 INFO FileInputFormat - Total input files to process : 1 20:45:57.955 INFO DAGScheduler - Registering RDD 5464 (mapToPair at SparkUtils.java:161) as input to shuffle 302 20:45:57.955 INFO DAGScheduler - Got job 556 (collect at SparkUtils.java:205) with 1 output partitions 20:45:57.955 INFO DAGScheduler - Final stage: ResultStage 1222 (collect at SparkUtils.java:205) 20:45:57.956 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1221) 20:45:57.956 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1221) 20:45:57.956 INFO DAGScheduler - Submitting ShuffleMapStage 1221 (MapPartitionsRDD[5464] at mapToPair at SparkUtils.java:161), which has no missing parents 20:45:57.972 INFO MemoryStore - Block broadcast_1972 stored as values in memory 
(estimated size 457.4 KiB, free 1913.6 MiB) 20:45:57.975 INFO MemoryStore - Block broadcast_1972_piece0 stored as bytes in memory (estimated size 201.1 KiB, free 1913.4 MiB) 20:45:57.975 INFO BlockManagerInfo - Added broadcast_1972_piece0 in memory on localhost:34191 (size: 201.1 KiB, free: 1918.4 MiB) 20:45:57.975 INFO SparkContext - Created broadcast 1972 from broadcast at DAGScheduler.scala:1580 20:45:57.975 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1221 (MapPartitionsRDD[5464] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 20:45:57.975 INFO TaskSchedulerImpl - Adding task set 1221.0 with 1 tasks resource profile 0 20:45:57.976 INFO TaskSetManager - Starting task 0.0 in stage 1221.0 (TID 1061) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9738 bytes) 20:45:57.976 INFO Executor - Running task 0.0 in stage 1221.0 (TID 1061) 20:45:58.006 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/optical_dupes.bam:0+678 20:45:58.010 INFO Executor - Finished task 0.0 in stage 1221.0 (TID 1061). 
1148 bytes result sent to driver 20:45:58.010 INFO TaskSetManager - Finished task 0.0 in stage 1221.0 (TID 1061) in 34 ms on localhost (executor driver) (1/1) 20:45:58.010 INFO TaskSchedulerImpl - Removed TaskSet 1221.0, whose tasks have all completed, from pool 20:45:58.010 INFO DAGScheduler - ShuffleMapStage 1221 (mapToPair at SparkUtils.java:161) finished in 0.054 s 20:45:58.010 INFO DAGScheduler - looking for newly runnable stages 20:45:58.010 INFO DAGScheduler - running: HashSet() 20:45:58.010 INFO DAGScheduler - waiting: HashSet(ResultStage 1222) 20:45:58.010 INFO DAGScheduler - failed: HashSet() 20:45:58.010 INFO DAGScheduler - Submitting ResultStage 1222 (MapPartitionsRDD[5467] at mapPartitions at SparkUtils.java:188), which has no missing parents 20:45:58.011 INFO MemoryStore - Block broadcast_1973 stored as values in memory (estimated size 7.4 KiB, free 1913.4 MiB) 20:45:58.011 INFO MemoryStore - Block broadcast_1973_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1913.4 MiB) 20:45:58.011 INFO BlockManagerInfo - Added broadcast_1973_piece0 in memory on localhost:34191 (size: 4.1 KiB, free: 1918.4 MiB) 20:45:58.012 INFO SparkContext - Created broadcast 1973 from broadcast at DAGScheduler.scala:1580 20:45:58.012 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1222 (MapPartitionsRDD[5467] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0)) 20:45:58.012 INFO TaskSchedulerImpl - Adding task set 1222.0 with 1 tasks resource profile 0 20:45:58.012 INFO TaskSetManager - Starting task 0.0 in stage 1222.0 (TID 1062) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes) 20:45:58.012 INFO Executor - Running task 0.0 in stage 1222.0 (TID 1062) 20:45:58.013 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:58.013 INFO ShuffleBlockFetcherIterator - 
Started 0 remote fetches in 0 ms 20:45:58.014 INFO Executor - Finished task 0.0 in stage 1222.0 (TID 1062). 2269 bytes result sent to driver 20:45:58.015 INFO TaskSetManager - Finished task 0.0 in stage 1222.0 (TID 1062) in 3 ms on localhost (executor driver) (1/1) 20:45:58.015 INFO TaskSchedulerImpl - Removed TaskSet 1222.0, whose tasks have all completed, from pool 20:45:58.015 INFO DAGScheduler - ResultStage 1222 (collect at SparkUtils.java:205) finished in 0.004 s 20:45:58.015 INFO DAGScheduler - Job 556 is finished. Cancelling potential speculative or zombie tasks for this job 20:45:58.015 INFO TaskSchedulerImpl - Killing all running tasks in stage 1222: Stage finished 20:45:58.015 INFO DAGScheduler - Job 556 finished: collect at SparkUtils.java:205, took 0.060553 s 20:45:58.019 INFO MemoryStore - Block broadcast_1974 stored as values in memory (estimated size 136.0 B, free 1913.4 MiB) 20:45:58.020 INFO MemoryStore - Block broadcast_1974_piece0 stored as bytes in memory (estimated size 24.0 B, free 1913.4 MiB) 20:45:58.020 INFO BlockManagerInfo - Added broadcast_1974_piece0 in memory on localhost:34191 (size: 24.0 B, free: 1918.4 MiB) 20:45:58.020 INFO SparkContext - Created broadcast 1974 from broadcast at MarkDuplicatesSparkUtils.java:126 20:45:58.020 INFO MemoryStore - Block broadcast_1975 stored as values in memory (estimated size 136.0 B, free 1913.4 MiB) 20:45:58.020 INFO MemoryStore - Block broadcast_1975_piece0 stored as bytes in memory (estimated size 21.0 B, free 1913.4 MiB) 20:45:58.020 INFO BlockManagerInfo - Added broadcast_1975_piece0 in memory on localhost:34191 (size: 21.0 B, free: 1918.4 MiB) 20:45:58.020 INFO SparkContext - Created broadcast 1975 from broadcast at MarkDuplicatesSparkUtils.java:127 20:45:58.032 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542 20:45:58.032 INFO DAGScheduler - Registering RDD 5476 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 305 20:45:58.032 INFO 
DAGScheduler - Registering RDD 5480 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 304 20:45:58.032 INFO DAGScheduler - Registering RDD 5484 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 303 20:45:58.033 INFO DAGScheduler - Got job 557 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions 20:45:58.033 INFO DAGScheduler - Final stage: ResultStage 1227 (collectAsMap at MarkDuplicatesSparkUtils.java:542) 20:45:58.033 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1226) 20:45:58.033 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1226) 20:45:58.033 INFO DAGScheduler - Submitting ShuffleMapStage 1224 (MapPartitionsRDD[5476] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents 20:45:58.033 INFO MemoryStore - Block broadcast_1976 stored as values in memory (estimated size 21.1 KiB, free 1913.4 MiB) 20:45:58.039 INFO MemoryStore - Block broadcast_1976_piece0 stored as bytes in memory (estimated size 10.9 KiB, free 1913.3 MiB) 20:45:58.039 INFO BlockManagerInfo - Added broadcast_1976_piece0 in memory on localhost:34191 (size: 10.9 KiB, free: 1918.4 MiB) 20:45:58.039 INFO BlockManagerInfo - Removed broadcast_1951_piece0 on localhost:34191 in memory (size: 13.2 KiB, free: 1918.4 MiB) 20:45:58.039 INFO SparkContext - Created broadcast 1976 from broadcast at DAGScheduler.scala:1580 20:45:58.040 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1224 (MapPartitionsRDD[5476] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0)) 20:45:58.040 INFO TaskSchedulerImpl - Adding task set 1224.0 with 1 tasks resource profile 0 20:45:58.040 INFO BlockManagerInfo - Removed broadcast_1958_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1918.4 MiB) 20:45:58.040 INFO BlockManagerInfo - Removed broadcast_1969_piece0 on localhost:34191 in memory (size: 81.0 KiB, free: 1918.5 MiB) 20:45:58.040 INFO 
TaskSetManager - Starting task 0.0 in stage 1224.0 (TID 1063) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes) 20:45:58.041 INFO Executor - Running task 0.0 in stage 1224.0 (TID 1063) 20:45:58.041 INFO BlockManagerInfo - Removed broadcast_1945_piece0 on localhost:34191 in memory (size: 64.3 KiB, free: 1918.6 MiB) 20:45:58.042 INFO BlockManagerInfo - Removed broadcast_1941_piece0 on localhost:34191 in memory (size: 663.0 B, free: 1918.6 MiB) 20:45:58.042 INFO BlockManagerInfo - Removed broadcast_1964_piece0 on localhost:34191 in memory (size: 22.0 KiB, free: 1918.6 MiB) 20:45:58.042 INFO BlockManagerInfo - Removed broadcast_1955_piece0 on localhost:34191 in memory (size: 8.0 KiB, free: 1918.6 MiB) 20:45:58.043 INFO BlockManagerInfo - Removed broadcast_1954_piece0 on localhost:34191 in memory (size: 663.0 B, free: 1918.6 MiB) 20:45:58.043 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:58.043 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:58.043 INFO BlockManagerInfo - Removed broadcast_1970_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1918.7 MiB) 20:45:58.043 INFO BlockManagerInfo - Removed broadcast_1952_piece0 on localhost:34191 in memory (size: 8.5 KiB, free: 1918.7 MiB) 20:45:58.044 INFO BlockManagerInfo - Removed broadcast_1956_piece0 on localhost:34191 in memory (size: 72.1 KiB, free: 1918.8 MiB) 20:45:58.044 INFO BlockManagerInfo - Removed broadcast_1949_piece0 on localhost:34191 in memory (size: 31.0 B, free: 1918.8 MiB) 20:45:58.044 INFO BlockManagerInfo - Removed broadcast_1963_piece0 on localhost:34191 in memory (size: 19.8 KiB, free: 1918.8 MiB) 20:45:58.045 INFO BlockManagerInfo - Removed broadcast_1972_piece0 on localhost:34191 in memory (size: 201.1 KiB, free: 1919.0 MiB) 20:45:58.045 INFO BlockManagerInfo - Removed broadcast_1935_piece0 on 
localhost:34191 in memory (size: 19.0 B, free: 1919.0 MiB) 20:45:58.046 INFO BlockManagerInfo - Removed broadcast_1973_piece0 on localhost:34191 in memory (size: 4.1 KiB, free: 1919.0 MiB) 20:45:58.046 INFO Executor - Finished task 0.0 in stage 1224.0 (TID 1063). 1922 bytes result sent to driver 20:45:58.046 INFO TaskSetManager - Finished task 0.0 in stage 1224.0 (TID 1063) in 6 ms on localhost (executor driver) (1/1) 20:45:58.046 INFO TaskSchedulerImpl - Removed TaskSet 1224.0, whose tasks have all completed, from pool 20:45:58.046 INFO DAGScheduler - ShuffleMapStage 1224 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.013 s 20:45:58.046 INFO DAGScheduler - looking for newly runnable stages 20:45:58.046 INFO DAGScheduler - running: HashSet() 20:45:58.046 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1225, ShuffleMapStage 1226, ResultStage 1227) 20:45:58.047 INFO DAGScheduler - failed: HashSet() 20:45:58.047 INFO DAGScheduler - Submitting ShuffleMapStage 1225 (MapPartitionsRDD[5480] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents 20:45:58.047 INFO MemoryStore - Block broadcast_1977 stored as values in memory (estimated size 25.2 KiB, free 1915.8 MiB) 20:45:58.048 INFO MemoryStore - Block broadcast_1977_piece0 stored as bytes in memory (estimated size 13.1 KiB, free 1915.9 MiB) 20:45:58.048 INFO BlockManagerInfo - Removed broadcast_1944_piece0 on localhost:34191 in memory (size: 64.3 KiB, free: 1919.0 MiB) 20:45:58.048 INFO BlockManagerInfo - Added broadcast_1977_piece0 in memory on localhost:34191 (size: 13.1 KiB, free: 1919.0 MiB) 20:45:58.048 INFO SparkContext - Created broadcast 1977 from broadcast at DAGScheduler.scala:1580 20:45:58.049 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1225 (MapPartitionsRDD[5480] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0)) 20:45:58.049 INFO TaskSchedulerImpl - Adding task set 1225.0 with 1 tasks resource 
profile 0 20:45:58.049 INFO BlockManagerInfo - Removed broadcast_1966_piece0 on localhost:34191 in memory (size: 18.0 KiB, free: 1919.0 MiB) 20:45:58.049 INFO TaskSetManager - Starting task 0.0 in stage 1225.0 (TID 1064) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes) 20:45:58.049 INFO Executor - Running task 0.0 in stage 1225.0 (TID 1064) 20:45:58.050 INFO BlockManagerInfo - Removed broadcast_1962_piece0 on localhost:34191 in memory (size: 209.0 B, free: 1919.0 MiB) 20:45:58.051 INFO BlockManagerInfo - Removed broadcast_1965_piece0 on localhost:34191 in memory (size: 17.3 KiB, free: 1919.1 MiB) 20:45:58.051 INFO ShuffleBlockFetcherIterator - Getting 1 (106.0 B) non-empty blocks including 1 (106.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:45:58.051 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:45:58.051 INFO BlockManagerInfo - Removed broadcast_1946_piece0 on localhost:34191 in memory (size: 201.2 KiB, free: 1919.3 MiB) 20:45:58.052 INFO BlockManagerInfo - Removed broadcast_1960_piece0 on localhost:34191 in memory (size: 4.1 KiB, free: 1919.3 MiB) 20:45:58.052 INFO BlockManagerInfo - Removed broadcast_1961_piece0 on localhost:34191 in memory (size: 540.0 B, free: 1919.3 MiB) 20:45:58.052 INFO BlockManagerInfo - Removed broadcast_1936_piece0 on localhost:34191 in memory (size: 31.0 B, free: 1919.3 MiB) 20:45:58.053 INFO BlockManagerInfo - Removed broadcast_1950_piece0 on localhost:34191 in memory (size: 11.2 KiB, free: 1919.3 MiB) 20:45:58.053 INFO BlockManagerInfo - Removed broadcast_1948_piece0 on localhost:34191 in memory (size: 19.0 B, free: 1919.3 MiB) 20:45:58.054 INFO BlockManagerInfo - Removed broadcast_1943_piece0 on localhost:34191 in memory (size: 72.1 KiB, free: 1919.3 MiB) 20:45:58.054 INFO BlockManagerInfo - Removed broadcast_1947_piece0 on localhost:34191 in memory (size: 4.1 KiB, free: 1919.3 MiB) 20:45:58.054 INFO BlockManagerInfo - Removed 
broadcast_1959_piece0 on localhost:34191 in memory (size: 210.1 KiB, free: 1919.6 MiB)
20:45:58.055 INFO BlockManagerInfo - Removed broadcast_1932_piece0 on localhost:34191 in memory (size: 64.3 KiB, free: 1919.6 MiB)
20:45:58.056 INFO BlockManagerInfo - Removed broadcast_1953_piece0 on localhost:34191 in memory (size: 9.1 KiB, free: 1919.6 MiB)
20:45:58.056 INFO Executor - Finished task 0.0 in stage 1225.0 (TID 1064). 1922 bytes result sent to driver
20:45:58.056 INFO BlockManagerInfo - Removed broadcast_1967_piece0 on localhost:34191 in memory (size: 7.9 KiB, free: 1919.6 MiB)
20:45:58.056 INFO TaskSetManager - Finished task 0.0 in stage 1225.0 (TID 1064) in 7 ms on localhost (executor driver) (1/1)
20:45:58.056 INFO TaskSchedulerImpl - Removed TaskSet 1225.0, whose tasks have all completed, from pool
20:45:58.056 INFO DAGScheduler - ShuffleMapStage 1225 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.009 s
20:45:58.056 INFO DAGScheduler - looking for newly runnable stages
20:45:58.056 INFO DAGScheduler - running: HashSet()
20:45:58.056 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1226, ResultStage 1227)
20:45:58.056 INFO DAGScheduler - failed: HashSet()
20:45:58.056 INFO DAGScheduler - Submitting ShuffleMapStage 1226 (MapPartitionsRDD[5484] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
20:45:58.057 INFO BlockManagerInfo - Removed broadcast_1957_piece0 on localhost:34191 in memory (size: 64.4 KiB, free: 1919.7 MiB)
20:45:58.057 INFO MemoryStore - Block broadcast_1978 stored as values in memory (estimated size 15.9 KiB, free 1918.8 MiB)
20:45:58.057 INFO BlockManagerInfo - Removed broadcast_1968_piece0 on localhost:34191 in memory (size: 16.8 KiB, free: 1919.7 MiB)
20:45:58.058 INFO MemoryStore - Block broadcast_1978_piece0 stored as bytes in memory (estimated size 8.3 KiB, free 1918.9 MiB)
20:45:58.058 INFO BlockManagerInfo - Added broadcast_1978_piece0 in memory on localhost:34191 (size: 8.3 KiB, free: 1919.7 MiB)
20:45:58.058 INFO SparkContext - Created broadcast 1978 from broadcast at DAGScheduler.scala:1580
20:45:58.058 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1226 (MapPartitionsRDD[5484] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
20:45:58.058 INFO TaskSchedulerImpl - Adding task set 1226.0 with 1 tasks resource profile 0
20:45:58.058 INFO TaskSetManager - Starting task 0.0 in stage 1226.0 (TID 1065) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:58.059 INFO Executor - Running task 0.0 in stage 1226.0 (TID 1065)
20:45:58.060 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.060 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.062 INFO ShuffleBlockFetcherIterator - Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.062 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.063 INFO Executor - Finished task 0.0 in stage 1226.0 (TID 1065). 1922 bytes result sent to driver
20:45:58.064 INFO TaskSetManager - Finished task 0.0 in stage 1226.0 (TID 1065) in 6 ms on localhost (executor driver) (1/1)
20:45:58.064 INFO TaskSchedulerImpl - Removed TaskSet 1226.0, whose tasks have all completed, from pool
20:45:58.064 INFO DAGScheduler - ShuffleMapStage 1226 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.007 s
20:45:58.064 INFO DAGScheduler - looking for newly runnable stages
20:45:58.064 INFO DAGScheduler - running: HashSet()
20:45:58.064 INFO DAGScheduler - waiting: HashSet(ResultStage 1227)
20:45:58.064 INFO DAGScheduler - failed: HashSet()
20:45:58.064 INFO DAGScheduler - Submitting ResultStage 1227 (MapPartitionsRDD[5486] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
20:45:58.065 INFO MemoryStore - Block broadcast_1979 stored as values in memory (estimated size 17.1 KiB, free 1918.9 MiB)
20:45:58.065 INFO MemoryStore - Block broadcast_1979_piece0 stored as bytes in memory (estimated size 9.0 KiB, free 1918.9 MiB)
20:45:58.065 INFO BlockManagerInfo - Added broadcast_1979_piece0 in memory on localhost:34191 (size: 9.0 KiB, free: 1919.7 MiB)
20:45:58.065 INFO SparkContext - Created broadcast 1979 from broadcast at DAGScheduler.scala:1580
20:45:58.065 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1227 (MapPartitionsRDD[5486] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
20:45:58.065 INFO TaskSchedulerImpl - Adding task set 1227.0 with 1 tasks resource profile 0
20:45:58.066 INFO TaskSetManager - Starting task 0.0 in stage 1227.0 (TID 1066) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:58.066 INFO Executor - Running task 0.0 in stage 1227.0 (TID 1066)
20:45:58.067 INFO ShuffleBlockFetcherIterator - Getting 1 (129.0 B) non-empty blocks including 1 (129.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.067 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.069 INFO Executor - Finished task 0.0 in stage 1227.0 (TID 1066). 1930 bytes result sent to driver
20:45:58.069 INFO TaskSetManager - Finished task 0.0 in stage 1227.0 (TID 1066) in 3 ms on localhost (executor driver) (1/1)
20:45:58.069 INFO TaskSchedulerImpl - Removed TaskSet 1227.0, whose tasks have all completed, from pool
20:45:58.069 INFO DAGScheduler - ResultStage 1227 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.005 s
20:45:58.069 INFO DAGScheduler - Job 557 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:58.069 INFO TaskSchedulerImpl - Killing all running tasks in stage 1227: Stage finished
20:45:58.069 INFO DAGScheduler - Job 557 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.037322 s
20:45:58.084 INFO MemoryStore - Block broadcast_1980 stored as values in memory (estimated size 5.6 KiB, free 1918.9 MiB)
20:45:58.085 INFO MemoryStore - Block broadcast_1980_piece0 stored as bytes in memory (estimated size 500.0 B, free 1918.9 MiB)
20:45:58.085 INFO BlockManagerInfo - Added broadcast_1980_piece0 in memory on localhost:34191 (size: 500.0 B, free: 1919.7 MiB)
20:45:58.085 INFO SparkContext - Created broadcast 1980 from broadcast at ReadsSparkSink.java:133
20:45:58.090 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
20:45:58.090 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:58.090 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:58.106 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
20:45:58.106 INFO DAGScheduler - Registering RDD 5488 (mapToPair at SparkUtils.java:161) as input to shuffle 306
20:45:58.106 INFO DAGScheduler - Got job 558 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
20:45:58.107 INFO DAGScheduler - Final stage: ResultStage 1232 (runJob at SparkHadoopWriter.scala:83)
20:45:58.107 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1231)
20:45:58.107 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1231)
20:45:58.107 INFO DAGScheduler - Submitting ShuffleMapStage 1231 (MapPartitionsRDD[5488] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:58.107 INFO MemoryStore - Block broadcast_1981 stored as values in memory (estimated size 15.0 KiB, free 1918.9 MiB)
20:45:58.108 INFO MemoryStore - Block broadcast_1981_piece0 stored as bytes in memory (estimated size 7.8 KiB, free 1918.9 MiB)
20:45:58.108 INFO BlockManagerInfo - Added broadcast_1981_piece0 in memory on localhost:34191 (size: 7.8 KiB, free: 1919.7 MiB)
20:45:58.108 INFO SparkContext - Created broadcast 1981 from broadcast at DAGScheduler.scala:1580
20:45:58.108 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1231 (MapPartitionsRDD[5488] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:58.108 INFO TaskSchedulerImpl - Adding task set 1231.0 with 1 tasks resource profile 0
20:45:58.109 INFO TaskSetManager - Starting task 0.0 in stage 1231.0 (TID 1067) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:58.109 INFO Executor - Running task 0.0 in stage 1231.0 (TID 1067)
20:45:58.110 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.110 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.112 INFO ShuffleBlockFetcherIterator - Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.112 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.112 INFO Executor - Finished task 0.0 in stage 1231.0 (TID 1067). 1922 bytes result sent to driver
20:45:58.113 INFO TaskSetManager - Finished task 0.0 in stage 1231.0 (TID 1067) in 4 ms on localhost (executor driver) (1/1)
20:45:58.113 INFO TaskSchedulerImpl - Removed TaskSet 1231.0, whose tasks have all completed, from pool
20:45:58.113 INFO DAGScheduler - ShuffleMapStage 1231 (mapToPair at SparkUtils.java:161) finished in 0.006 s
20:45:58.113 INFO DAGScheduler - looking for newly runnable stages
20:45:58.113 INFO DAGScheduler - running: HashSet()
20:45:58.113 INFO DAGScheduler - waiting: HashSet(ResultStage 1232)
20:45:58.113 INFO DAGScheduler - failed: HashSet()
20:45:58.113 INFO DAGScheduler - Submitting ResultStage 1232 (MapPartitionsRDD[5494] at saveAsTextFile at SamSink.java:65), which has no missing parents
20:45:58.119 INFO MemoryStore - Block broadcast_1982 stored as values in memory (estimated size 160.3 KiB, free 1918.7 MiB)
20:45:58.120 INFO MemoryStore - Block broadcast_1982_piece0 stored as bytes in memory (estimated size 72.1 KiB, free 1918.6 MiB)
20:45:58.120 INFO BlockManagerInfo - Added broadcast_1982_piece0 in memory on localhost:34191 (size: 72.1 KiB, free: 1919.6 MiB)
20:45:58.120 INFO SparkContext - Created broadcast 1982 from broadcast at DAGScheduler.scala:1580
20:45:58.121 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1232 (MapPartitionsRDD[5494] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0))
20:45:58.121 INFO TaskSchedulerImpl - Adding task set 1232.0 with 1 tasks resource profile 0
20:45:58.121 INFO TaskSetManager - Starting task 0.0 in stage 1232.0 (TID 1068) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:58.121 INFO Executor - Running task 0.0 in stage 1232.0 (TID 1068)
20:45:58.124 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.124 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.126 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
20:45:58.126 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:58.126 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:58.136 INFO FileOutputCommitter - Saved output of task 'attempt_202506122045584660249185200992794_5494_m_000000_0' to file:/tmp/MarkDups3401605899169319998/MarkDups.sam.parts/_temporary/0/task_202506122045584660249185200992794_5494_m_000000
20:45:58.136 INFO SparkHadoopMapRedUtil - attempt_202506122045584660249185200992794_5494_m_000000_0: Committed. Elapsed time: 0 ms.
20:45:58.137 INFO Executor - Finished task 0.0 in stage 1232.0 (TID 1068). 1858 bytes result sent to driver
20:45:58.137 INFO TaskSetManager - Finished task 0.0 in stage 1232.0 (TID 1068) in 16 ms on localhost (executor driver) (1/1)
20:45:58.137 INFO TaskSchedulerImpl - Removed TaskSet 1232.0, whose tasks have all completed, from pool
20:45:58.137 INFO DAGScheduler - ResultStage 1232 (runJob at SparkHadoopWriter.scala:83) finished in 0.024 s
20:45:58.137 INFO DAGScheduler - Job 558 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:58.137 INFO TaskSchedulerImpl - Killing all running tasks in stage 1232: Stage finished
20:45:58.137 INFO DAGScheduler - Job 558 finished: runJob at SparkHadoopWriter.scala:83, took 0.031365 s
20:45:58.138 INFO SparkHadoopWriter - Start to commit write Job job_202506122045584660249185200992794_5494.
20:45:58.142 INFO SparkHadoopWriter - Write Job job_202506122045584660249185200992794_5494 committed. Elapsed time: 4 ms.
20:45:58.149 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups3401605899169319998/MarkDups.sam
20:45:58.153 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups3401605899169319998/MarkDups.sam done
20:45:58.153 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 8:45:58 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes.
Runtime.totalMemory()=1738539008
20:45:58.158 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:58.158 INFO MarkDuplicatesSpark - org.broadinstitute.hellbender.tools.spark.transforms.markduplicates vUnavailable
20:45:58.159 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
20:45:58.159 INFO MarkDuplicatesSpark - Executing as runner@pkrvmxyh4eaekms on Linux v6.11.0-1015-azure amd64
20:45:58.159 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.6+10
20:45:58.159 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:45:58 PM UTC
20:45:58.159 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:58.159 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:45:58.159 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
20:45:58.159 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
20:45:58.159 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
20:45:58.159 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
20:45:58.159 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
20:45:58.159 INFO MarkDuplicatesSpark - Inflater: IntelInflater
20:45:58.159 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
20:45:58.159 INFO MarkDuplicatesSpark - Requester pays: disabled
20:45:58.159 INFO MarkDuplicatesSpark - Initializing engine
20:45:58.159 INFO MarkDuplicatesSpark - Done initializing engine
20:45:58.159 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
20:45:58.161 INFO MemoryStore - Block broadcast_1983 stored as values in memory (estimated size 306.3 KiB, free 1918.3 MiB)
20:45:58.168 INFO MemoryStore - Block broadcast_1983_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1918.3 MiB)
20:45:58.168 INFO BlockManagerInfo - Added broadcast_1983_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1919.6 MiB)
20:45:58.168 INFO SparkContext - Created broadcast 1983 from newAPIHadoopFile at PathSplitSource.java:96
20:45:58.189 INFO MemoryStore - Block broadcast_1984 stored as values in memory (estimated size 306.3 KiB, free 1918.0 MiB)
20:45:58.195 INFO MemoryStore - Block broadcast_1984_piece0 stored as bytes in memory (estimated size 64.4 KiB, free 1917.9 MiB)
20:45:58.195 INFO BlockManagerInfo - Added broadcast_1984_piece0 in memory on localhost:34191 (size: 64.4 KiB, free: 1919.5 MiB)
20:45:58.196 INFO SparkContext - Created broadcast 1984 from newAPIHadoopFile at PathSplitSource.java:96
20:45:58.228 INFO SparkContext - Starting job: collect at SparkUtils.java:205
20:45:58.229 INFO FileInputFormat - Total input files to process : 1
20:45:58.229 INFO DAGScheduler - Registering RDD 5508 (mapToPair at SparkUtils.java:161) as input to shuffle 307
20:45:58.229 INFO DAGScheduler - Got job 559 (collect at SparkUtils.java:205) with 1 output partitions
20:45:58.229 INFO DAGScheduler - Final stage: ResultStage 1234 (collect at SparkUtils.java:205)
20:45:58.229 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1233)
20:45:58.229 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1233)
20:45:58.230 INFO DAGScheduler - Submitting ShuffleMapStage 1233 (MapPartitionsRDD[5508] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:58.246 INFO MemoryStore - Block broadcast_1985 stored as values in memory (estimated size 481.4 KiB, free 1917.4 MiB)
20:45:58.249 INFO MemoryStore - Block broadcast_1985_piece0 stored as bytes in memory (estimated size 207.4 KiB, free 1917.2 MiB)
20:45:58.249 INFO BlockManagerInfo - Added broadcast_1985_piece0 in memory on localhost:34191 (size: 207.4 KiB, free: 1919.3 MiB)
20:45:58.249 INFO SparkContext - Created broadcast 1985 from broadcast at DAGScheduler.scala:1580
20:45:58.249 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1233 (MapPartitionsRDD[5508] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:58.249 INFO TaskSchedulerImpl - Adding task set 1233.0 with 1 tasks resource profile 0
20:45:58.249 INFO TaskSetManager - Starting task 0.0 in stage 1233.0 (TID 1069) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9757 bytes)
20:45:58.249 INFO Executor - Running task 0.0 in stage 1233.0 (TID 1069)
20:45:58.280 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/inputSingleLibrarySolexa16404.bam:0+5416
20:45:58.285 INFO Executor - Finished task 0.0 in stage 1233.0 (TID 1069). 1148 bytes result sent to driver
20:45:58.285 INFO TaskSetManager - Finished task 0.0 in stage 1233.0 (TID 1069) in 36 ms on localhost (executor driver) (1/1)
20:45:58.285 INFO TaskSchedulerImpl - Removed TaskSet 1233.0, whose tasks have all completed, from pool
20:45:58.286 INFO DAGScheduler - ShuffleMapStage 1233 (mapToPair at SparkUtils.java:161) finished in 0.056 s
20:45:58.286 INFO DAGScheduler - looking for newly runnable stages
20:45:58.286 INFO DAGScheduler - running: HashSet()
20:45:58.286 INFO DAGScheduler - waiting: HashSet(ResultStage 1234)
20:45:58.286 INFO DAGScheduler - failed: HashSet()
20:45:58.286 INFO DAGScheduler - Submitting ResultStage 1234 (MapPartitionsRDD[5511] at mapPartitions at SparkUtils.java:188), which has no missing parents
20:45:58.286 INFO MemoryStore - Block broadcast_1986 stored as values in memory (estimated size 7.4 KiB, free 1917.2 MiB)
20:45:58.287 INFO MemoryStore - Block broadcast_1986_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1917.2 MiB)
20:45:58.287 INFO BlockManagerInfo - Added broadcast_1986_piece0 in memory on localhost:34191 (size: 4.1 KiB, free: 1919.3 MiB)
20:45:58.287 INFO SparkContext - Created broadcast 1986 from broadcast at DAGScheduler.scala:1580
20:45:58.287 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1234 (MapPartitionsRDD[5511] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
20:45:58.287 INFO TaskSchedulerImpl - Adding task set 1234.0 with 1 tasks resource profile 0
20:45:58.287 INFO TaskSetManager - Starting task 0.0 in stage 1234.0 (TID 1070) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:58.288 INFO Executor - Running task 0.0 in stage 1234.0 (TID 1070)
20:45:58.288 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.289 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.290 INFO Executor - Finished task 0.0 in stage 1234.0 (TID 1070). 2039 bytes result sent to driver
20:45:58.290 INFO TaskSetManager - Finished task 0.0 in stage 1234.0 (TID 1070) in 3 ms on localhost (executor driver) (1/1)
20:45:58.290 INFO TaskSchedulerImpl - Removed TaskSet 1234.0, whose tasks have all completed, from pool
20:45:58.290 INFO DAGScheduler - ResultStage 1234 (collect at SparkUtils.java:205) finished in 0.004 s
20:45:58.290 INFO DAGScheduler - Job 559 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:58.290 INFO TaskSchedulerImpl - Killing all running tasks in stage 1234: Stage finished
20:45:58.290 INFO DAGScheduler - Job 559 finished: collect at SparkUtils.java:205, took 0.061968 s
20:45:58.296 INFO MemoryStore - Block broadcast_1987 stored as values in memory (estimated size 392.0 B, free 1917.2 MiB)
20:45:58.296 INFO MemoryStore - Block broadcast_1987_piece0 stored as bytes in memory (estimated size 33.0 B, free 1917.2 MiB)
20:45:58.296 INFO BlockManagerInfo - Added broadcast_1987_piece0 in memory on localhost:34191 (size: 33.0 B, free: 1919.3 MiB)
20:45:58.297 INFO SparkContext - Created broadcast 1987 from broadcast at MarkDuplicatesSparkUtils.java:126
20:45:58.297 INFO MemoryStore - Block broadcast_1988 stored as values in memory (estimated size 144.0 B, free 1917.2 MiB)
20:45:58.297 INFO MemoryStore - Block broadcast_1988_piece0 stored as bytes in memory (estimated size 28.0 B, free 1917.2 MiB)
20:45:58.297 INFO BlockManagerInfo - Added broadcast_1988_piece0 in memory on localhost:34191 (size: 28.0 B, free: 1919.3 MiB)
20:45:58.297 INFO SparkContext - Created broadcast 1988 from broadcast at MarkDuplicatesSparkUtils.java:127
20:45:58.310 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
20:45:58.311 INFO DAGScheduler - Registering RDD 5520 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 310
20:45:58.311 INFO DAGScheduler - Registering RDD 5524 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 309
20:45:58.311 INFO DAGScheduler - Registering RDD 5528 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 308
20:45:58.311 INFO DAGScheduler - Got job 560 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
20:45:58.311 INFO DAGScheduler - Final stage: ResultStage 1239 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
20:45:58.311 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1238)
20:45:58.311 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1238)
20:45:58.311 INFO DAGScheduler - Submitting ShuffleMapStage 1236 (MapPartitionsRDD[5520] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
20:45:58.312 INFO MemoryStore - Block broadcast_1989 stored as values in memory (estimated size 45.2 KiB, free 1917.2 MiB)
20:45:58.313 INFO MemoryStore - Block broadcast_1989_piece0 stored as bytes in memory (estimated size 17.1 KiB, free 1917.2 MiB)
20:45:58.313 INFO BlockManagerInfo - Added broadcast_1989_piece0 in memory on localhost:34191 (size: 17.1 KiB, free: 1919.3 MiB)
20:45:58.313 INFO SparkContext - Created broadcast 1989 from broadcast at DAGScheduler.scala:1580
20:45:58.313 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1236 (MapPartitionsRDD[5520] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
20:45:58.313 INFO TaskSchedulerImpl - Adding task set 1236.0 with 1 tasks resource profile 0
20:45:58.313 INFO TaskSetManager - Starting task 0.0 in stage 1236.0 (TID 1071) (localhost, executor driver, partition 0, NODE_LOCAL, 9890 bytes)
20:45:58.314 INFO Executor - Running task 0.0 in stage 1236.0 (TID 1071)
20:45:58.315 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.315 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.318 INFO Executor - Finished task 0.0 in stage 1236.0 (TID 1071). 1922 bytes result sent to driver
20:45:58.318 INFO TaskSetManager - Finished task 0.0 in stage 1236.0 (TID 1071) in 5 ms on localhost (executor driver) (1/1)
20:45:58.318 INFO TaskSchedulerImpl - Removed TaskSet 1236.0, whose tasks have all completed, from pool
20:45:58.318 INFO DAGScheduler - ShuffleMapStage 1236 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.006 s
20:45:58.318 INFO DAGScheduler - looking for newly runnable stages
20:45:58.318 INFO DAGScheduler - running: HashSet()
20:45:58.318 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1237, ShuffleMapStage 1238, ResultStage 1239)
20:45:58.318 INFO DAGScheduler - failed: HashSet()
20:45:58.318 INFO DAGScheduler - Submitting ShuffleMapStage 1237 (MapPartitionsRDD[5524] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
20:45:58.319 INFO MemoryStore - Block broadcast_1990 stored as values in memory (estimated size 49.3 KiB, free 1917.1 MiB)
20:45:58.319 INFO MemoryStore - Block broadcast_1990_piece0 stored as bytes in memory (estimated size 19.3 KiB, free 1917.1 MiB)
20:45:58.320 INFO BlockManagerInfo - Added broadcast_1990_piece0 in memory on localhost:34191 (size: 19.3 KiB, free: 1919.2 MiB)
20:45:58.320 INFO SparkContext - Created broadcast 1990 from broadcast at DAGScheduler.scala:1580
20:45:58.320 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1237 (MapPartitionsRDD[5524] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
20:45:58.320 INFO TaskSchedulerImpl - Adding task set 1237.0 with 1 tasks resource profile 0
20:45:58.320 INFO TaskSetManager - Starting task 0.0 in stage 1237.0 (TID 1072) (localhost, executor driver, partition 0, NODE_LOCAL, 9365 bytes)
20:45:58.320 INFO Executor - Running task 0.0 in stage 1237.0 (TID 1072)
20:45:58.322 INFO ShuffleBlockFetcherIterator - Getting 1 (593.0 B) non-empty blocks including 1 (593.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.322 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.326 INFO Executor - Finished task 0.0 in stage 1237.0 (TID 1072). 1922 bytes result sent to driver
20:45:58.326 INFO TaskSetManager - Finished task 0.0 in stage 1237.0 (TID 1072) in 6 ms on localhost (executor driver) (1/1)
20:45:58.326 INFO TaskSchedulerImpl - Removed TaskSet 1237.0, whose tasks have all completed, from pool
20:45:58.327 INFO DAGScheduler - ShuffleMapStage 1237 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.008 s
20:45:58.327 INFO DAGScheduler - looking for newly runnable stages
20:45:58.327 INFO DAGScheduler - running: HashSet()
20:45:58.327 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1238, ResultStage 1239)
20:45:58.327 INFO DAGScheduler - failed: HashSet()
20:45:58.327 INFO DAGScheduler - Submitting ShuffleMapStage 1238 (MapPartitionsRDD[5528] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
20:45:58.328 INFO MemoryStore - Block broadcast_1991 stored as values in memory (estimated size 40.0 KiB, free 1917.1 MiB)
20:45:58.328 INFO MemoryStore - Block broadcast_1991_piece0 stored as bytes in memory (estimated size 14.5 KiB, free 1917.0 MiB)
20:45:58.328 INFO BlockManagerInfo - Added broadcast_1991_piece0 in memory on localhost:34191 (size: 14.5 KiB, free: 1919.2 MiB)
20:45:58.328 INFO SparkContext - Created broadcast 1991 from broadcast at DAGScheduler.scala:1580
20:45:58.328 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1238 (MapPartitionsRDD[5528] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
20:45:58.329 INFO TaskSchedulerImpl - Adding task set 1238.0 with 1 tasks resource profile 0
20:45:58.329 INFO TaskSetManager - Starting task 0.0 in stage 1238.0 (TID 1073) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:58.329 INFO Executor - Running task 0.0 in stage 1238.0 (TID 1073)
20:45:58.330 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.330 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.332 INFO ShuffleBlockFetcherIterator - Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.332 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.333 INFO Executor - Finished task 0.0 in stage 1238.0 (TID 1073). 1922 bytes result sent to driver
20:45:58.333 INFO TaskSetManager - Finished task 0.0 in stage 1238.0 (TID 1073) in 4 ms on localhost (executor driver) (1/1)
20:45:58.333 INFO TaskSchedulerImpl - Removed TaskSet 1238.0, whose tasks have all completed, from pool
20:45:58.333 INFO DAGScheduler - ShuffleMapStage 1238 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.006 s
20:45:58.333 INFO DAGScheduler - looking for newly runnable stages
20:45:58.333 INFO DAGScheduler - running: HashSet()
20:45:58.333 INFO DAGScheduler - waiting: HashSet(ResultStage 1239)
20:45:58.333 INFO DAGScheduler - failed: HashSet()
20:45:58.333 INFO DAGScheduler - Submitting ResultStage 1239 (MapPartitionsRDD[5530] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
20:45:58.334 INFO MemoryStore - Block broadcast_1992 stored as values in memory (estimated size 41.2 KiB, free 1917.0 MiB)
20:45:58.334 INFO MemoryStore - Block broadcast_1992_piece0 stored as bytes in memory (estimated size 15.3 KiB, free 1917.0 MiB)
20:45:58.335 INFO BlockManagerInfo - Added broadcast_1992_piece0 in memory on localhost:34191 (size: 15.3 KiB, free: 1919.2 MiB)
20:45:58.335 INFO SparkContext - Created broadcast 1992 from broadcast at DAGScheduler.scala:1580
20:45:58.335 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1239 (MapPartitionsRDD[5530] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
20:45:58.335 INFO TaskSchedulerImpl - Adding task set 1239.0 with 1 tasks resource profile 0
20:45:58.335 INFO TaskSetManager - Starting task 0.0 in stage 1239.0 (TID 1074) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:58.335 INFO Executor - Running task 0.0 in stage 1239.0 (TID 1074)
20:45:58.337 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.337 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.338 INFO Executor - Finished task 0.0 in stage 1239.0 (TID 1074). 1944 bytes result sent to driver
20:45:58.338 INFO TaskSetManager - Finished task 0.0 in stage 1239.0 (TID 1074) in 3 ms on localhost (executor driver) (1/1)
20:45:58.338 INFO TaskSchedulerImpl - Removed TaskSet 1239.0, whose tasks have all completed, from pool
20:45:58.338 INFO DAGScheduler - ResultStage 1239 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.005 s
20:45:58.338 INFO DAGScheduler - Job 560 is finished. Cancelling potential speculative or zombie tasks for this job
20:45:58.338 INFO TaskSchedulerImpl - Killing all running tasks in stage 1239: Stage finished
20:45:58.338 INFO DAGScheduler - Job 560 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.028154 s
20:45:58.350 INFO MemoryStore - Block broadcast_1993 stored as values in memory (estimated size 93.5 KiB, free 1916.9 MiB)
20:45:58.350 INFO MemoryStore - Block broadcast_1993_piece0 stored as bytes in memory (estimated size 5.6 KiB, free 1916.9 MiB)
20:45:58.350 INFO BlockManagerInfo - Added broadcast_1993_piece0 in memory on localhost:34191 (size: 5.6 KiB, free: 1919.2 MiB)
20:45:58.351 INFO SparkContext - Created broadcast 1993 from broadcast at ReadsSparkSink.java:133
20:45:58.354 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
20:45:58.354 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:45:58.354 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:45:58.370 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
20:45:58.370 INFO DAGScheduler - Registering RDD 5532 (mapToPair at SparkUtils.java:161) as input to shuffle 311
20:45:58.370 INFO DAGScheduler - Got job 561 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
20:45:58.370 INFO DAGScheduler - Final stage: ResultStage 1244 (runJob at SparkHadoopWriter.scala:83)
20:45:58.370 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1243)
20:45:58.370 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1243)
20:45:58.371 INFO DAGScheduler - Submitting ShuffleMapStage 1243 (MapPartitionsRDD[5532] at mapToPair at SparkUtils.java:161), which has no missing parents
20:45:58.371 INFO MemoryStore - Block broadcast_1994 stored as values in memory (estimated size 39.1 KiB, free 1916.8 MiB)
20:45:58.372 INFO MemoryStore - Block broadcast_1994_piece0 stored as bytes in memory (estimated size 14.1 KiB, free 1916.8 MiB)
20:45:58.372 INFO BlockManagerInfo - Added broadcast_1994_piece0 in memory on localhost:34191 (size: 14.1 KiB, free: 1919.2 MiB)
20:45:58.372 INFO SparkContext - Created broadcast 1994 from broadcast at DAGScheduler.scala:1580
20:45:58.372 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1243 (MapPartitionsRDD[5532] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:45:58.372 INFO TaskSchedulerImpl - Adding task set 1243.0 with 1 tasks resource profile 0
20:45:58.373 INFO TaskSetManager - Starting task 0.0 in stage 1243.0 (TID 1075) (localhost, executor driver, partition 0, NODE_LOCAL, 9948 bytes)
20:45:58.373 INFO Executor - Running task 0.0 in stage 1243.0 (TID 1075)
20:45:58.375 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.375 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.376 INFO ShuffleBlockFetcherIterator - Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.376 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:45:58.377 INFO Executor - Finished task 0.0 in stage 1243.0 (TID 1075). 1922 bytes result sent to driver
20:45:58.378 INFO TaskSetManager - Finished task 0.0 in stage 1243.0 (TID 1075) in 5 ms on localhost (executor driver) (1/1)
20:45:58.378 INFO TaskSchedulerImpl - Removed TaskSet 1243.0, whose tasks have all completed, from pool
20:45:58.378 INFO DAGScheduler - ShuffleMapStage 1243 (mapToPair at SparkUtils.java:161) finished in 0.007 s
20:45:58.378 INFO DAGScheduler - looking for newly runnable stages
20:45:58.378 INFO DAGScheduler - running: HashSet()
20:45:58.378 INFO DAGScheduler - waiting: HashSet(ResultStage 1244)
20:45:58.378 INFO DAGScheduler - failed: HashSet()
20:45:58.378 INFO DAGScheduler - Submitting ResultStage 1244 (MapPartitionsRDD[5538] at saveAsTextFile at SamSink.java:65), which has no missing parents
20:45:58.385 INFO MemoryStore - Block broadcast_1995 stored as values in memory (estimated size 184.3 KiB, free 1916.7 MiB)
20:45:58.386 INFO MemoryStore - Block broadcast_1995_piece0 stored as bytes in memory (estimated size 78.3 KiB, free 1916.6 MiB)
20:45:58.386 INFO BlockManagerInfo - Added broadcast_1995_piece0 in memory on localhost:34191 (size: 78.3 KiB, free: 1919.1 MiB)
20:45:58.386 INFO SparkContext - Created broadcast 1995 from broadcast at DAGScheduler.scala:1580
20:45:58.386 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1244 (MapPartitionsRDD[5538] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0))
20:45:58.386 INFO TaskSchedulerImpl - Adding task set 1244.0 with 1 tasks resource profile 0
20:45:58.386 INFO TaskSetManager - Starting task 0.0 in stage 1244.0 (TID 1076) (localhost, executor driver, partition 0, NODE_LOCAL, 9376 bytes)
20:45:58.387 INFO Executor - Running task 0.0 in stage 1244.0 (TID 1076)
20:45:58.390 INFO ShuffleBlockFetcherIterator - Getting 1 (3.9 KiB) non-empty blocks including 1 (3.9 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:45:58.390 INFO ShuffleBlockFetcherIterator - Started 0
remote fetches in 0 ms 20:45:58.392 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 20:45:58.392 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:45:58.392 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:45:58.402 INFO FileOutputCommitter - Saved output of task 'attempt_202506122045581041024714695165288_5538_m_000000_0' to file:/tmp/MarkDups9836824339496587886/MarkDups.sam.parts/_temporary/0/task_202506122045581041024714695165288_5538_m_000000 20:45:58.403 INFO SparkHadoopMapRedUtil - attempt_202506122045581041024714695165288_5538_m_000000_0: Committed. Elapsed time: 0 ms. 20:45:58.403 INFO Executor - Finished task 0.0 in stage 1244.0 (TID 1076). 1858 bytes result sent to driver 20:45:58.403 INFO TaskSetManager - Finished task 0.0 in stage 1244.0 (TID 1076) in 17 ms on localhost (executor driver) (1/1) 20:45:58.403 INFO TaskSchedulerImpl - Removed TaskSet 1244.0, whose tasks have all completed, from pool 20:45:58.403 INFO DAGScheduler - ResultStage 1244 (runJob at SparkHadoopWriter.scala:83) finished in 0.025 s 20:45:58.403 INFO DAGScheduler - Job 561 is finished. Cancelling potential speculative or zombie tasks for this job 20:45:58.403 INFO TaskSchedulerImpl - Killing all running tasks in stage 1244: Stage finished 20:45:58.403 INFO DAGScheduler - Job 561 finished: runJob at SparkHadoopWriter.scala:83, took 0.033548 s 20:45:58.403 INFO SparkHadoopWriter - Start to commit write Job job_202506122045581041024714695165288_5538. 20:45:58.407 INFO SparkHadoopWriter - Write Job job_202506122045581041024714695165288_5538 committed. Elapsed time: 3 ms. 
20:45:58.414 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups9836824339496587886/MarkDups.sam
20:45:58.418 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups9836824339496587886/MarkDups.sam done
20:45:58.418 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 8:45:58 PM UTC] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1738539008
20:46:04.420 ERROR Executor - Exception in task 0.0 in stage 1464.0 (TID 1223)
org.broadinstitute.hellbender.exceptions.UserException$ReadMissingReadGroup: SAM/BAM/CRAM file (unknown) is malformed: Read RUNID:7:1203:2886:82292 is missing the read group (RG) tag, which is required by the GATK. Please use http://gatkforums.broadinstitute.org/discussion/59/companion-utilities-replacereadgroups to fix this problem
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.getLibraryForRead(MarkDuplicatesSparkUtils.java:64) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.EmptyFragment.<init>(EmptyFragment.java:36) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.MarkDuplicatesSparkRecord.newEmptyFragment(MarkDuplicatesSparkRecord.java:44) ~[main/:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$0(MarkDuplicatesSparkUtils.java:141) ~[main/:?]
	at java.base/java.util.stream.ReferencePipeline$15$1.accept(ReferencePipeline.java:540) ~[?:?]
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$5ac2632f$1(MarkDuplicatesSparkUtils.java:150) ~[main/:?]
	at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMapToPair$1(JavaRDDLike.scala:143) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at scala.collection.Iterator$$anon$10.nextCur(Iterator.scala:594) ~[scala-library-2.13.14.jar:?]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:608) ~[scala-library-2.13.14.jar:?]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.Task.run(Task.scala:141) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623) [spark-core_2.13-3.5.0.jar:3.5.0]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:833) [?:?]
20:46:04.437 ERROR TaskSetManager - Task 0 in stage 1464.0 failed 1 times; aborting job
20:46:04.662 ERROR Executor - Exception in task 0.0 in stage 1471.0 (TID 1226)
org.broadinstitute.hellbender.exceptions.UserException$HeaderMissingReadGroup: SAM/BAM/CRAM file (unknown) is malformed: Read RUNID:7:1203:2886:82292 contains an (RG) tag with the group NotADuplicateGroup which is not found in the file header.
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.getLibraryForRead(MarkDuplicatesSparkUtils.java:66) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.EmptyFragment.<init>(EmptyFragment.java:36) ~[main/:?]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.MarkDuplicatesSparkRecord.newEmptyFragment(MarkDuplicatesSparkRecord.java:44) ~[main/:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$0(MarkDuplicatesSparkUtils.java:141) ~[main/:?]
	at java.base/java.util.stream.ReferencePipeline$15$1.accept(ReferencePipeline.java:540) ~[?:?]
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$5ac2632f$1(MarkDuplicatesSparkUtils.java:150) ~[main/:?]
	at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMapToPair$1(JavaRDDLike.scala:143) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at scala.collection.Iterator$$anon$10.nextCur(Iterator.scala:594) ~[scala-library-2.13.14.jar:?]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:608) ~[scala-library-2.13.14.jar:?]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.scheduler.Task.run(Task.scala:141) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) ~[spark-common-utils_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94) ~[spark-core_2.13-3.5.0.jar:3.5.0]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623) [spark-core_2.13-3.5.0.jar:3.5.0]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:833) [?:?]
20:46:04.664 ERROR TaskSetManager - Task 0 in stage 1471.0 failed 1 times; aborting job