Class org.broadinstitute.hellbender.tools.spark.pipelines.MarkDuplicatesSparkIntegrationTest

100% successful

Tests
Test | Duration | Result |
---|---|---|
assertThatExpectedOutputUpdateToggleIsDisabled | 0s | passed |
testAssertAssumeUnsortedFilesAreQueryGroupedFiles | 0.538s | passed |
testAssertCorrectSortOrderMultipleBams | 0.153s | passed |
testAssertCorrectSortOrderMultipleBamsOverriding | 0.498s | passed |
testBulkFragmentsNoDuplicates | 1.114s | passed |
testBulkFragmentsWithDuplicates | 2.144s | passed |
testDifferentChromosomesInOppositeOrder | 0.325s | passed |
testDuplicateDetectionDataProviderWithMetrics[0](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/example.chr1.1-1K.unmarkedDups.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.chr1.1-1K.unmarkedDups.markDuplicate.metrics) | 0.268s | passed |
testDuplicateDetectionDataProviderWithMetrics[1](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/optical_dupes.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.optical_dupes.markDuplicate.metrics) | 0.263s | passed |
testDuplicateDetectionDataProviderWithMetrics[2](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/inputSingleLibrarySolexa16404.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.picard-2.15.0.sorted.inputSingleLibrarySolexa16404.metrics) | 0.261s | passed |
testFlowModeFlag | 1.721s | passed |
testHashCollisionHandling | 1.083s | passed |
testMDOrder[0](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.mdOrderBug.bam) | 0.291s | passed |
testMDOrder[1](src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug2.bam, src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/expected.mdOrderBug2.bam) | 0.305s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[0](true, true, true) | 0.317s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[1](true, false, true) | 0.307s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[2](false, true, true) | 0.313s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[3](true, true, false) | 0.316s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[4](true, false, false) | 0.312s | passed |
testManyOpticalDuplicateClusterOneEndSamePositionOneCluster[5](false, true, false) | 0.328s | passed |
testMappedFragmentAndMappedPairFirstOfPairNonPrimary | 0.306s | passed |
testMappedFragmentAndMatePairFirstUnmapped | 0.332s | passed |
testMappedFragmentAndMatePairSecondUnmapped | 0.325s | passed |
testMappedPairAndMappedFragmentAndMatePairFirstUnmapped | 0.328s | passed |
testMappedPairAndMappedFragmentAndMatePairSecondUnmapped | 0.351s | passed |
testMappedPairAndMatePairFirstOppositeStrandSecondUnmapped | 0.322s | passed |
testMappedPairAndMatePairFirstUnmapped | 0.321s | passed |
testMappedPairAndMatePairSecondUnmapped | 0.312s | passed |
testMappedPairWithFirstEndSamePositionAndOther | 0.334s | passed |
testMappedPairWithSamePosition | 0.317s | passed |
testMappedPairWithSamePositionSameCigar | 0.312s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[0]([Ljava.io.File;@7c39b9d, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.404s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[1]([Ljava.io.File;@5a60a99, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.425s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[2]([Ljava.io.File;@21ca9450, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.374s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[3]([Ljava.io.File;@6196f8cd, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.375s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[4]([Ljava.io.File;@71e52ca5, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.350s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[5]([Ljava.io.File;@2505b595, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.243s | passed |
testMarkDuplicatesSparkDeletingDuplicateReads[6]([Ljava.io.File;@513e3e5c, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.223s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[0]([Ljava.io.File;@4421bc34, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.379s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[1]([Ljava.io.File;@5b78f250, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.453s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[2]([Ljava.io.File;@81fd05d, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.461s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[3]([Ljava.io.File;@765b6c12, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.332s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[4]([Ljava.io.File;@1cf43cfc, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.366s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[5]([Ljava.io.File;@cc3330e, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.190s | passed |
testMarkDuplicatesSparkDeletingOpticalDuplicateReads[6]([Ljava.io.File;@4c04b43b, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.155s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[0]([Ljava.io.File;@2bf8b0df, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.374s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[1]([Ljava.io.File;@5d6fddd2, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.355s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[2]([Ljava.io.File;@12224d29, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.377s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[3]([Ljava.io.File;@43ec5fd9, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.342s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[4]([Ljava.io.File;@293a50e1, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.350s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[5]([Ljava.io.File;@20aa5be3, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.217s | passed |
testMarkDuplicatesSparkIntegrationTestLocal[6]([Ljava.io.File;@2d369b6c, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.250s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[0]([Ljava.io.File;@7bdbf22c, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.364s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[1]([Ljava.io.File;@b7c2273, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.366s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[2]([Ljava.io.File;@15deb362, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.368s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[3]([Ljava.io.File;@6d12aeee, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.333s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[4]([Ljava.io.File;@66aefd1e, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.327s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[5]([Ljava.io.File;@241566e6, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.119s | passed |
testMarkDuplicatesSparkMarkingAllDuplicatesWithTagging[6]([Ljava.io.File;@272fa755, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.144s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[0]([Ljava.io.File;@760a6523, 20, 0, {Solexa-16419=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16416=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16404=[0, 3, 0, 0, 0, 0, 0.0, 0], Solexa-16406=[0, 1, 0, 0, 0, 0, 0.0, 0], Solexa-16412=[0, 1, 0, 0, 0, 0, 0.0, 0]}) | 0.350s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[1]([Ljava.io.File;@5e135f36, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.354s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[2]([Ljava.io.File;@39b869f2, 90, 6, {Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 0.378s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[3]([Ljava.io.File;@440c8b84, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.329s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[4]([Ljava.io.File;@72d51c59, 4, 2, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0]}) | 0.315s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[5]([Ljava.io.File;@7b83d501, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.125s | passed |
testMarkDuplicatesSparkMarkingOpticalDuplicatesWithTagging[6]([Ljava.io.File;@7a711913, 94, 8, {mylib=[0, 2, 0, 0, 1, 1, 0.5, 0], Solexa-16419=[4, 4, 4, 0, 0, 0, 0.0, 0], Solexa-16416=[2, 2, 2, 0, 0, 0, 0.0, 0], Solexa-16404=[3, 9, 3, 0, 2, 0, 0.190476, 17], Solexa-16406=[1, 10, 1, 0, 0, 0, 0.0, 0], Solexa-16412=[3, 6, 3, 0, 1, 0, 0.133333, 15]}) | 1.165s | passed |
testMatePairFirstUnmapped | 0.308s | passed |
testMatePairSecondUnmapped | 0.309s | passed |
testNoReadGroupInRead | 0.250s | passed |
testNonExistantReadGroupInRead | 0.239s | passed |
testNullOpticalDuplicates | 0.323s | passed |
testOpticalDuplicateClusterOneEndSamePositionOneCluster | 0.320s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicatesWithinPixelDistance | 0.306s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[0](RUNID:7:1203:2886:82292, RUNID:7:1205:3886:16834) | 0.329s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[10](RUNID:7:1203:6886:16756, RUNID:7:1203:4886:26756) | 0.309s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[11](RUNID:7:1203:2886:34756, RUNID:7:1203:2886:36756) | 0.311s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[12](RUNID:7:1203:2886:35756, RUNID:7:1203:2886:36756) | 0.308s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[13](RUNID:7:1203:2886:37756, RUNID:7:1203:2886:36756) | 0.313s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[14](RUNID:7:1203:2886:38756, RUNID:7:1203:2886:36756) | 0.313s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[15](HK3T5CCXX160204:3:1112:11586:37067, HK3T5CCXX160204:3:1112:11586:32144) | 0.311s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[1](RUNID:7:1203:2886:16756, RUNID:7:1205:3886:16756) | 0.314s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[2](RUNID:7:1204:2886:16756, RUNID:7:1205:3886:16756) | 0.315s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[3](RUNID:7:1205:2886:16756, RUNID:7:1205:3886:16756) | 0.320s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[4](RUNID:7:1206:2886:16756, RUNID:7:1205:3886:16756) | 0.341s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[5](RUNID:7:1207:2886:16756, RUNID:7:1205:3886:16756) | 0.328s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[6](RUNID:7:1203:2886:16756, RUNID:7:1203:4886:26756) | 0.309s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[7](RUNID:7:1203:3886:16756, RUNID:7:1203:4886:26756) | 0.308s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[8](RUNID:7:1203:4886:16756, RUNID:7:1203:4886:26756) | 0.321s | passed |
testOpticalDuplicateClusterSamePositionNoOpticalDuplicates[9](RUNID:7:1203:5886:16756, RUNID:7:1203:4886:26756) | 0.308s | passed |
testOpticalDuplicateClusterSamePositionOneOpticalDuplicatesWithinPixelDistance | 0.315s | passed |
testOpticalDuplicateClustersAddingSecondEndFirstSameCoordinate | 0.308s | passed |
testOpticalDuplicateFinding | 0.305s | passed |
testOpticalDuplicateFindingPxelDistance | 0.317s | passed |
testOpticalDuplicatesAndPCRDuplicatesOrientationDifference | 0.313s | passed |
testOpticalDuplicatesDifferentReadGroups | 0.303s | passed |
testOpticalDuplicatesTheSameReadGroup | 0.312s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[0](RUNID:7:1203:2886:82292, RUNID:7:1205:3886:16834) | 0.308s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[10](RUNID:7:1203:6886:16756, RUNID:7:1203:4886:26756) | 0.301s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[11](RUNID:7:1203:2886:34756, RUNID:7:1203:2886:36756) | 0.324s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[12](RUNID:7:1203:2886:35756, RUNID:7:1203:2886:36756) | 0.300s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[13](RUNID:7:1203:2886:37756, RUNID:7:1203:2886:36756) | 0.303s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[14](RUNID:7:1203:2886:38756, RUNID:7:1203:2886:36756) | 0.319s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[15](HK3T5CCXX160204:3:1112:11586:37067, HK3T5CCXX160204:3:1112:11586:32144) | 0.300s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[1](RUNID:7:1203:2886:16756, RUNID:7:1205:3886:16756) | 0.306s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[2](RUNID:7:1204:2886:16756, RUNID:7:1205:3886:16756) | 0.310s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[3](RUNID:7:1205:2886:16756, RUNID:7:1205:3886:16756) | 0.308s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[4](RUNID:7:1206:2886:16756, RUNID:7:1205:3886:16756) | 0.306s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[5](RUNID:7:1207:2886:16756, RUNID:7:1205:3886:16756) | 0.312s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[6](RUNID:7:1203:2886:16756, RUNID:7:1203:4886:26756) | 0.303s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[7](RUNID:7:1203:3886:16756, RUNID:7:1203:4886:26756) | 0.308s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[8](RUNID:7:1203:4886:16756, RUNID:7:1203:4886:26756) | 0.318s | passed |
testOpticalDuplicatesTiebrokenByPhysicalLocationNotStartPosition[9](RUNID:7:1203:5886:16756, RUNID:7:1203:4886:26756) | 0.307s | passed |
testPathologicalOrderingAtTheSamePosition | 0.306s | passed |
testReadSameStartPositionOrientationOverride | 0.331s | passed |
testReadsHaveSameStartPositionButDifferentChromosomeNonEquivalence | 0.307s | passed |
testSecondEndIsBeforeFirstInCoordinate | 0.300s | passed |
testSingleMappedFragment | 0.311s | passed |
testSingleMappedFragmentAndSingleMappedPair | 0.314s | passed |
testSingleMappedFragmentAndTwoMappedPairs | 0.299s | passed |
testSingleMappedPair | 0.331s | passed |
testSingleUnmappedFragment | 0.300s | passed |
testSingleUnmappedPair | 0.298s | passed |
testStackOverFlowPairSetSwap | 0.318s | passed |
testSupplementaryReadUnmappedMate | 0.304s | passed |
testThreeGroupsOnDifferentChromosomesOfThreeMappedPairs | 0.302s | passed |
testThreeMappedPairs | 0.322s | passed |
testThreeMappedPairsWithMatchingSecondMate | 0.309s | passed |
testTwoGroupsOnDifferentChromosomesOfThreeMappedPairs | 0.303s | passed |
testTwoGroupsOnDifferentChromosomesOfTwoFragments | 0.330s | passed |
testTwoGroupsOnDifferentChromosomesOfTwoMappedPairs | 0.332s | passed |
testTwoMappedFragments | 0.300s | passed |
testTwoMappedPairWithSamePosition | 0.311s | passed |
testTwoMappedPairWithSamePositionDifferentStrands | 0.297s | passed |
testTwoMappedPairWithSamePositionDifferentStrands2 | 0.302s | passed |
testTwoMappedPairs | 0.307s | passed |
testTwoMappedPairsAndMappedSecondaryFragment | 0.308s | passed |
testTwoMappedPairsAndTerminalUnmappedFragment | 0.307s | passed |
testTwoMappedPairsAndTerminalUnmappedPair | 0.308s | passed |
testTwoMappedPairsMatesSoftClipped | 0.302s | passed |
testTwoMappedPairsWithOppositeOrientations | 0.307s | passed |
testTwoMappedPairsWithOppositeOrientationsNumberTwo | 0.302s | passed |
testTwoMappedPairsWithSoftClipping | 0.300s | passed |
testTwoMappedPairsWithSoftClippingBoth | 0.316s | passed |
testTwoMappedPairsWithSoftClippingFirstOfPairOnly | 0.298s | passed |
testTwoMappedPairsWithSoftClippingFirstOfPairOnlyNoMateCigar | 0.306s | passed |
testTwoMappedPairsWithSupplementaryReads[0](true, true, true) | 0.336s | passed |
testTwoMappedPairsWithSupplementaryReads[1](true, false, true) | 0.324s | passed |
testTwoMappedPairsWithSupplementaryReads[2](false, true, true) | 0.311s | passed |
testTwoMappedPairsWithSupplementaryReads[3](true, true, false) | 0.307s | passed |
testTwoMappedPairsWithSupplementaryReads[4](true, false, false) | 0.305s | passed |
testTwoMappedPairsWithSupplementaryReads[5](false, true, false) | 0.298s | passed |
testTwoUnmappedFragments | 0.302s | passed |
Standard error
[June 12, 2025 at 8:59:53 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=918552576
[June 12, 2025 at 8:59:53 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=918552576
[June 12, 2025 at 8:59:53 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=918552576
[June 12, 2025 at 8:59:55 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.03 minutes. Runtime.totalMemory()=926941184
[June 12, 2025 at 8:59:56 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.02 minutes. Runtime.totalMemory()=926941184
20:59:56.770 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:59:56.770 INFO MarkDuplicatesSpark - The Genome Analysis Toolkit (GATK) v092bdf2-SNAPSHOT
20:59:56.770 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
20:59:56.771 INFO MarkDuplicatesSpark - Executing as root@e24cea241624 on Linux v6.11.0-1015-azure amd64
20:59:56.771 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.12+7-Ubuntu-1ubuntu222.04
20:59:56.771 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:59:56 PM GMT
20:59:56.771 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:59:56.771 INFO MarkDuplicatesSpark - ------------------------------------------------------------
20:59:56.771 INFO MarkDuplicatesSpark - HTSJDK Version: 4.2.0
20:59:56.771 INFO MarkDuplicatesSpark - Picard Version: 3.4.0
20:59:56.771 INFO MarkDuplicatesSpark - Built for Spark Version: 3.5.0
20:59:56.771 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
20:59:56.771 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
20:59:56.771 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
20:59:56.771 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
20:59:56.771 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
20:59:56.771 INFO MarkDuplicatesSpark - Inflater: IntelInflater
20:59:56.772 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
20:59:56.772 INFO MarkDuplicatesSpark - Requester pays: disabled
20:59:56.772 INFO MarkDuplicatesSpark - Initializing engine
20:59:56.772 INFO MarkDuplicatesSpark - Done initializing engine
20:59:56.772 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
20:59:56.774 INFO MemoryStore - Block broadcast_1319 stored as values in memory (estimated size 305.5 KiB, free 1905.4 MiB)
20:59:56.781 INFO MemoryStore - Block broadcast_1319_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1905.3 MiB)
20:59:56.781 INFO BlockManagerInfo - Added broadcast_1319_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1919.4 MiB)
20:59:56.781 INFO SparkContext - Created broadcast 1319 from newAPIHadoopFile at PathSplitSource.java:96
20:59:56.801 INFO MemoryStore - Block broadcast_1320 stored as values in memory (estimated size 305.5 KiB, free 1905.0 MiB)
20:59:56.808 INFO MemoryStore - Block broadcast_1320_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1904.9 MiB)
20:59:56.808 INFO BlockManagerInfo - Added broadcast_1320_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1919.4 MiB)
20:59:56.809 INFO SparkContext - Created broadcast 1320 from newAPIHadoopFile at PathSplitSource.java:96
20:59:56.841 INFO SparkContext - Starting job: collect at SparkUtils.java:205
20:59:56.841 INFO FileInputFormat - Total input files to process : 1
20:59:56.842 INFO DAGScheduler - Registering RDD 3369 (mapToPair at SparkUtils.java:161) as input to shuffle 94
20:59:56.842 INFO DAGScheduler - Got job 407 (collect at SparkUtils.java:205) with 1 output partitions
20:59:56.842 INFO DAGScheduler - Final stage: ResultStage 712 (collect at SparkUtils.java:205)
20:59:56.842 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 711)
20:59:56.843 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 711)
20:59:56.843 INFO DAGScheduler - Submitting ShuffleMapStage 711 (MapPartitionsRDD[3369] at mapToPair at SparkUtils.java:161), which has no missing parents
20:59:56.860 INFO MemoryStore - Block broadcast_1321 stored as values in memory (estimated size 459.2 KiB, free 1904.5 MiB)
20:59:56.863 INFO MemoryStore - Block broadcast_1321_piece0 stored as bytes in memory (estimated size 201.4 KiB, free 1904.3 MiB)
20:59:56.863 INFO BlockManagerInfo - Added broadcast_1321_piece0 in memory on localhost:43253 (size: 201.4 KiB, free: 1919.2 MiB)
20:59:56.863 INFO SparkContext - Created broadcast 1321 from broadcast at DAGScheduler.scala:1580
20:59:56.863 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 711 (MapPartitionsRDD[3369] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:59:56.863 INFO TaskSchedulerImpl - Adding task set 711.0 with 1 tasks resource profile 0
20:59:56.864 INFO TaskSetManager - Starting task 0.0 in stage 711.0 (TID 641) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9650 bytes)
20:59:56.865 INFO Executor - Running task 0.0 in stage 711.0 (TID 641)
20:59:56.892 INFO NewHadoopRDD - Input split: file:/gatkCloneMountPoint/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug.bam:0+1507
20:59:56.896 INFO Executor - Finished task 0.0 in stage 711.0 (TID 641). 1148 bytes result sent to driver
20:59:56.897 INFO TaskSetManager - Finished task 0.0 in stage 711.0 (TID 641) in 33 ms on localhost (executor driver) (1/1)
20:59:56.897 INFO TaskSchedulerImpl - Removed TaskSet 711.0, whose tasks have all completed, from pool
20:59:56.897 INFO DAGScheduler - ShuffleMapStage 711 (mapToPair at SparkUtils.java:161) finished in 0.054 s
20:59:56.897 INFO DAGScheduler - looking for newly runnable stages
20:59:56.897 INFO DAGScheduler - running: HashSet()
20:59:56.897 INFO DAGScheduler - waiting: HashSet(ResultStage 712)
20:59:56.897 INFO DAGScheduler - failed: HashSet()
20:59:56.897 INFO DAGScheduler - Submitting ResultStage 712 (MapPartitionsRDD[3372] at mapPartitions at SparkUtils.java:188), which has no missing parents
20:59:56.898 INFO MemoryStore - Block broadcast_1322 stored as values in memory (estimated size 7.4 KiB, free 1904.3 MiB)
20:59:56.899 INFO MemoryStore - Block broadcast_1322_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1904.3 MiB)
20:59:56.899 INFO BlockManagerInfo - Added broadcast_1322_piece0 in memory on localhost:43253 (size: 4.1 KiB, free: 1919.2 MiB)
20:59:56.899 INFO SparkContext - Created broadcast 1322 from broadcast at DAGScheduler.scala:1580
20:59:56.899 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 712 (MapPartitionsRDD[3372] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
20:59:56.899 INFO TaskSchedulerImpl - Adding task set 712.0 with 1 tasks resource profile 0
20:59:56.900 INFO TaskSetManager - Starting task 0.0 in stage 712.0 (TID 642) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
20:59:56.900 INFO Executor - Running task 0.0 in stage 712.0 (TID 642)
20:59:56.901 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:56.901 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:56.903 INFO Executor - Finished task 0.0 in stage 712.0 (TID 642). 2238 bytes result sent to driver
20:59:56.904 INFO TaskSetManager - Finished task 0.0 in stage 712.0 (TID 642) in 4 ms on localhost (executor driver) (1/1)
20:59:56.904 INFO TaskSchedulerImpl - Removed TaskSet 712.0, whose tasks have all completed, from pool
20:59:56.904 INFO DAGScheduler - ResultStage 712 (collect at SparkUtils.java:205) finished in 0.006 s
20:59:56.904 INFO DAGScheduler - Job 407 is finished. Cancelling potential speculative or zombie tasks for this job
20:59:56.904 INFO TaskSchedulerImpl - Killing all running tasks in stage 712: Stage finished
20:59:56.904 INFO DAGScheduler - Job 407 finished: collect at SparkUtils.java:205, took 0.063360 s
20:59:56.910 INFO MemoryStore - Block broadcast_1323 stored as values in memory (estimated size 1632.0 B, free 1904.3 MiB)
20:59:56.910 INFO MemoryStore - Block broadcast_1323_piece0 stored as bytes in memory (estimated size 145.0 B, free 1904.3 MiB)
20:59:56.911 INFO BlockManagerInfo - Added broadcast_1323_piece0 in memory on localhost:43253 (size: 145.0 B, free: 1919.2 MiB)
20:59:56.911 INFO SparkContext - Created broadcast 1323 from broadcast at MarkDuplicatesSparkUtils.java:126
20:59:56.911 INFO MemoryStore - Block broadcast_1324 stored as values in memory (estimated size 304.0 B, free 1904.3 MiB)
20:59:56.911 INFO MemoryStore - Block broadcast_1324_piece0 stored as bytes in memory (estimated size 37.0 B, free 1904.3 MiB)
20:59:56.911 INFO BlockManagerInfo - Added broadcast_1324_piece0 in memory on localhost:43253 (size: 37.0 B, free: 1919.2 MiB)
20:59:56.912 INFO SparkContext - Created broadcast 1324 from broadcast at MarkDuplicatesSparkUtils.java:127
20:59:56.927 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
20:59:56.928 INFO DAGScheduler - Registering RDD 3381 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 97
20:59:56.928 INFO DAGScheduler - Registering RDD 3385 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 96
20:59:56.928 INFO DAGScheduler - Registering RDD 3389 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 95
20:59:56.928 INFO DAGScheduler - Got job 408 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
20:59:56.928 INFO DAGScheduler - Final stage: ResultStage 717 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
20:59:56.928 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 716)
20:59:56.928 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 716)
20:59:56.929 INFO DAGScheduler - Submitting ShuffleMapStage 714 (MapPartitionsRDD[3381] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
20:59:56.930 INFO MemoryStore - Block broadcast_1325 stored as values in memory (estimated size 24.2 KiB, free 1904.3 MiB)
20:59:56.930 INFO MemoryStore - Block broadcast_1325_piece0 stored as bytes in memory (estimated size 12.6 KiB, free 1904.2 MiB)
20:59:56.930 INFO BlockManagerInfo - Added broadcast_1325_piece0 in memory on localhost:43253 (size: 12.6 KiB, free: 1919.2 MiB)
20:59:56.931 INFO SparkContext - Created broadcast 1325 from broadcast at DAGScheduler.scala:1580
20:59:56.931 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 714 (MapPartitionsRDD[3381] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
20:59:56.931 INFO TaskSchedulerImpl - Adding task set 714.0 with 1 tasks resource profile 0
20:59:56.931 INFO TaskSetManager - Starting task 0.0 in stage 714.0 (TID 643) (localhost, executor driver, partition 0, NODE_LOCAL, 9813 bytes)
20:59:56.932 INFO Executor - Running task 0.0 in stage 714.0 (TID 643)
20:59:56.933 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:56.934 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:56.937 INFO Executor - Finished task 0.0 in stage 714.0 (TID 643). 1922 bytes result sent to driver
20:59:56.937 INFO TaskSetManager - Finished task 0.0 in stage 714.0 (TID 643) in 6 ms on localhost (executor driver) (1/1)
20:59:56.937 INFO TaskSchedulerImpl - Removed TaskSet 714.0, whose tasks have all completed, from pool
20:59:56.937 INFO DAGScheduler - ShuffleMapStage 714 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.008 s
20:59:56.937 INFO DAGScheduler - looking for newly runnable stages
20:59:56.937 INFO DAGScheduler - running: HashSet()
20:59:56.937 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 715, ShuffleMapStage 716, ResultStage 717)
20:59:56.937 INFO DAGScheduler - failed: HashSet()
20:59:56.938 INFO DAGScheduler - Submitting ShuffleMapStage 715 (MapPartitionsRDD[3385] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
20:59:56.938 INFO MemoryStore - Block broadcast_1326 stored as values in memory (estimated size 28.3 KiB, free 1904.2 MiB)
20:59:56.939 INFO MemoryStore - Block broadcast_1326_piece0 stored as bytes in memory (estimated size 14.8 KiB, free 1904.2 MiB)
20:59:56.939 INFO BlockManagerInfo - Added broadcast_1326_piece0 in memory on localhost:43253 (size: 14.8 KiB, free: 1919.1 MiB)
20:59:56.939 INFO SparkContext - Created broadcast 1326 from broadcast at DAGScheduler.scala:1580
20:59:56.939 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 715 (MapPartitionsRDD[3385] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
20:59:56.939 INFO TaskSchedulerImpl - Adding task set 715.0 with 1 tasks resource profile 0
20:59:56.940 INFO TaskSetManager - Starting task 0.0 in stage 715.0 (TID 644) (localhost, executor driver, partition 0, NODE_LOCAL, 9288 bytes)
20:59:56.940 INFO Executor - Running task 0.0 in stage 715.0 (TID 644)
20:59:56.942 INFO ShuffleBlockFetcherIterator - 
Getting 1 (129.0 B) non-empty blocks including 1 (129.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:56.942 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:56.945 INFO Executor - Finished task 0.0 in stage 715.0 (TID 644). 1922 bytes result sent to driver 20:59:56.945 INFO TaskSetManager - Finished task 0.0 in stage 715.0 (TID 644) in 5 ms on localhost (executor driver) (1/1) 20:59:56.945 INFO TaskSchedulerImpl - Removed TaskSet 715.0, whose tasks have all completed, from pool 20:59:56.945 INFO DAGScheduler - ShuffleMapStage 715 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.007 s 20:59:56.946 INFO DAGScheduler - looking for newly runnable stages 20:59:56.946 INFO DAGScheduler - running: HashSet() 20:59:56.946 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 716, ResultStage 717) 20:59:56.946 INFO DAGScheduler - failed: HashSet() 20:59:56.946 INFO DAGScheduler - Submitting ShuffleMapStage 716 (MapPartitionsRDD[3389] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents 20:59:56.946 INFO MemoryStore - Block broadcast_1327 stored as values in memory (estimated size 19.0 KiB, free 1904.2 MiB) 20:59:56.947 INFO MemoryStore - Block broadcast_1327_piece0 stored as bytes in memory (estimated size 9.7 KiB, free 1904.2 MiB) 20:59:56.947 INFO BlockManagerInfo - Added broadcast_1327_piece0 in memory on localhost:43253 (size: 9.7 KiB, free: 1919.1 MiB) 20:59:56.947 INFO SparkContext - Created broadcast 1327 from broadcast at DAGScheduler.scala:1580 20:59:56.948 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 716 (MapPartitionsRDD[3389] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0)) 20:59:56.948 INFO TaskSchedulerImpl - Adding task set 716.0 with 1 tasks resource profile 0 20:59:56.948 INFO TaskSetManager - Starting task 0.0 in stage 716.0 (TID 645) (localhost, executor driver, partition 
0, NODE_LOCAL, 9871 bytes) 20:59:56.948 INFO Executor - Running task 0.0 in stage 716.0 (TID 645) 20:59:56.950 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:56.950 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:56.952 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:56.952 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:56.953 INFO Executor - Finished task 0.0 in stage 716.0 (TID 645). 1922 bytes result sent to driver 20:59:56.953 INFO TaskSetManager - Finished task 0.0 in stage 716.0 (TID 645) in 5 ms on localhost (executor driver) (1/1) 20:59:56.953 INFO TaskSchedulerImpl - Removed TaskSet 716.0, whose tasks have all completed, from pool 20:59:56.954 INFO DAGScheduler - ShuffleMapStage 716 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.008 s 20:59:56.954 INFO DAGScheduler - looking for newly runnable stages 20:59:56.954 INFO DAGScheduler - running: HashSet() 20:59:56.954 INFO DAGScheduler - waiting: HashSet(ResultStage 717) 20:59:56.954 INFO DAGScheduler - failed: HashSet() 20:59:56.954 INFO DAGScheduler - Submitting ResultStage 717 (MapPartitionsRDD[3391] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents 20:59:56.955 INFO MemoryStore - Block broadcast_1328 stored as values in memory (estimated size 20.2 KiB, free 1904.2 MiB) 20:59:56.955 INFO MemoryStore - Block broadcast_1328_piece0 stored as bytes in memory (estimated size 10.6 KiB, free 1904.1 MiB) 20:59:56.955 INFO BlockManagerInfo - Added broadcast_1328_piece0 in memory on localhost:43253 (size: 10.6 KiB, free: 1919.1 MiB) 20:59:56.956 INFO SparkContext - Created broadcast 1328 from broadcast at DAGScheduler.scala:1580 
20:59:56.956 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 717 (MapPartitionsRDD[3391] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0)) 20:59:56.956 INFO TaskSchedulerImpl - Adding task set 717.0 with 1 tasks resource profile 0 20:59:56.956 INFO TaskSetManager - Starting task 0.0 in stage 717.0 (TID 646) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes) 20:59:56.956 INFO Executor - Running task 0.0 in stage 717.0 (TID 646) 20:59:56.958 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:56.958 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:56.961 INFO Executor - Finished task 0.0 in stage 717.0 (TID 646). 1944 bytes result sent to driver 20:59:56.961 INFO TaskSetManager - Finished task 0.0 in stage 717.0 (TID 646) in 5 ms on localhost (executor driver) (1/1) 20:59:56.961 INFO TaskSchedulerImpl - Removed TaskSet 717.0, whose tasks have all completed, from pool 20:59:56.961 INFO DAGScheduler - ResultStage 717 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.007 s 20:59:56.961 INFO DAGScheduler - Job 408 is finished. 
Cancelling potential speculative or zombie tasks for this job 20:59:56.962 INFO TaskSchedulerImpl - Killing all running tasks in stage 717: Stage finished 20:59:56.962 INFO DAGScheduler - Job 408 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.034219 s 20:59:56.971 INFO MemoryStore - Block broadcast_1329 stored as values in memory (estimated size 20.3 KiB, free 1904.1 MiB) 20:59:56.972 INFO MemoryStore - Block broadcast_1329_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1904.1 MiB) 20:59:56.972 INFO BlockManagerInfo - Added broadcast_1329_piece0 in memory on localhost:43253 (size: 1850.0 B, free: 1919.1 MiB) 20:59:56.972 INFO SparkContext - Created broadcast 1329 from broadcast at ReadsSparkSink.java:133 20:59:56.973 INFO MemoryStore - Block broadcast_1330 stored as values in memory (estimated size 20.4 KiB, free 1904.1 MiB) 20:59:56.974 INFO MemoryStore - Block broadcast_1330_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1904.1 MiB) 20:59:56.974 INFO BlockManagerInfo - Added broadcast_1330_piece0 in memory on localhost:43253 (size: 1850.0 B, free: 1919.1 MiB) 20:59:56.974 INFO SparkContext - Created broadcast 1330 from broadcast at BamSink.java:76 20:59:56.976 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory 20:59:56.976 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:59:56.976 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:59:56.987 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83 20:59:56.988 INFO DAGScheduler - Registering RDD 3393 (mapToPair at SparkUtils.java:161) as input to shuffle 98 20:59:56.988 INFO DAGScheduler - Got job 409 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions 20:59:56.988 INFO DAGScheduler - Final stage: ResultStage 722 (runJob at SparkHadoopWriter.scala:83) 
20:59:56.988 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 721) 20:59:56.988 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 721) 20:59:56.988 INFO DAGScheduler - Submitting ShuffleMapStage 721 (MapPartitionsRDD[3393] at mapToPair at SparkUtils.java:161), which has no missing parents 20:59:56.989 INFO MemoryStore - Block broadcast_1331 stored as values in memory (estimated size 18.1 KiB, free 1904.1 MiB) 20:59:56.990 INFO MemoryStore - Block broadcast_1331_piece0 stored as bytes in memory (estimated size 9.5 KiB, free 1904.1 MiB) 20:59:56.990 INFO BlockManagerInfo - Added broadcast_1331_piece0 in memory on localhost:43253 (size: 9.5 KiB, free: 1919.1 MiB) 20:59:56.990 INFO SparkContext - Created broadcast 1331 from broadcast at DAGScheduler.scala:1580 20:59:56.990 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 721 (MapPartitionsRDD[3393] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 20:59:56.991 INFO TaskSchedulerImpl - Adding task set 721.0 with 1 tasks resource profile 0 20:59:56.991 INFO TaskSetManager - Starting task 0.0 in stage 721.0 (TID 647) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes) 20:59:56.992 INFO Executor - Running task 0.0 in stage 721.0 (TID 647) 20:59:56.994 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:56.994 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:56.996 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:56.996 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:56.998 INFO Executor - Finished task 0.0 in stage 721.0 (TID 647). 
1922 bytes result sent to driver 20:59:56.998 INFO TaskSetManager - Finished task 0.0 in stage 721.0 (TID 647) in 7 ms on localhost (executor driver) (1/1) 20:59:56.999 INFO TaskSchedulerImpl - Removed TaskSet 721.0, whose tasks have all completed, from pool 20:59:56.999 INFO DAGScheduler - ShuffleMapStage 721 (mapToPair at SparkUtils.java:161) finished in 0.010 s 20:59:56.999 INFO DAGScheduler - looking for newly runnable stages 20:59:56.999 INFO DAGScheduler - running: HashSet() 20:59:56.999 INFO DAGScheduler - waiting: HashSet(ResultStage 722) 20:59:56.999 INFO DAGScheduler - failed: HashSet() 20:59:56.999 INFO DAGScheduler - Submitting ResultStage 722 (MapPartitionsRDD[3398] at mapToPair at BamSink.java:91), which has no missing parents 20:59:57.006 INFO MemoryStore - Block broadcast_1332 stored as values in memory (estimated size 163.3 KiB, free 1903.9 MiB) 20:59:57.007 INFO MemoryStore - Block broadcast_1332_piece0 stored as bytes in memory (estimated size 73.5 KiB, free 1903.8 MiB) 20:59:57.007 INFO BlockManagerInfo - Added broadcast_1332_piece0 in memory on localhost:43253 (size: 73.5 KiB, free: 1919.0 MiB) 20:59:57.008 INFO SparkContext - Created broadcast 1332 from broadcast at DAGScheduler.scala:1580 20:59:57.008 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 722 (MapPartitionsRDD[3398] at mapToPair at BamSink.java:91) (first 15 tasks are for partitions Vector(0)) 20:59:57.008 INFO TaskSchedulerImpl - Adding task set 722.0 with 1 tasks resource profile 0 20:59:57.008 INFO TaskSetManager - Starting task 0.0 in stage 722.0 (TID 648) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes) 20:59:57.009 INFO Executor - Running task 0.0 in stage 722.0 (TID 648) 20:59:57.012 INFO ShuffleBlockFetcherIterator - Getting 1 (405.0 B) non-empty blocks including 1 (405.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:57.012 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 
ms 20:59:57.015 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory 20:59:57.015 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:59:57.015 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:59:57.015 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory 20:59:57.015 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 20:59:57.015 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 20:59:57.026 INFO FileOutputCommitter - Saved output of task 'attempt_202506122059562838821482652645179_3398_r_000000_0' to file:/tmp/local18970223784451453307/markdups4085689366490708010.bam.parts/_temporary/0/task_202506122059562838821482652645179_3398_r_000000 20:59:57.026 INFO SparkHadoopMapRedUtil - attempt_202506122059562838821482652645179_3398_r_000000_0: Committed. Elapsed time: 0 ms. 20:59:57.026 INFO Executor - Finished task 0.0 in stage 722.0 (TID 648). 1858 bytes result sent to driver 20:59:57.026 INFO TaskSetManager - Finished task 0.0 in stage 722.0 (TID 648) in 18 ms on localhost (executor driver) (1/1) 20:59:57.027 INFO TaskSchedulerImpl - Removed TaskSet 722.0, whose tasks have all completed, from pool 20:59:57.027 INFO DAGScheduler - ResultStage 722 (runJob at SparkHadoopWriter.scala:83) finished in 0.028 s 20:59:57.027 INFO DAGScheduler - Job 409 is finished. Cancelling potential speculative or zombie tasks for this job 20:59:57.027 INFO TaskSchedulerImpl - Killing all running tasks in stage 722: Stage finished 20:59:57.027 INFO DAGScheduler - Job 409 finished: runJob at SparkHadoopWriter.scala:83, took 0.039591 s 20:59:57.027 INFO SparkHadoopWriter - Start to commit write Job job_202506122059562838821482652645179_3398. 
20:59:57.030 INFO SparkHadoopWriter - Write Job job_202506122059562838821482652645179_3398 committed. Elapsed time: 2 ms. 20:59:57.037 INFO HadoopFileSystemWrapper - Concatenating 3 parts to /tmp/local18970223784451453307/markdups4085689366490708010.bam 20:59:57.039 INFO HadoopFileSystemWrapper - Concatenating to /tmp/local18970223784451453307/markdups4085689366490708010.bam done 20:59:57.039 INFO IndexFileMerger - Merging .sbi files in temp directory /tmp/local18970223784451453307/markdups4085689366490708010.bam.parts/ to /tmp/local18970223784451453307/markdups4085689366490708010.bam.sbi 20:59:57.042 INFO IndexFileMerger - Done merging .sbi files 20:59:57.042 INFO IndexFileMerger - Merging .bai files in temp directory /tmp/local18970223784451453307/markdups4085689366490708010.bam.parts/ to /tmp/local18970223784451453307/markdups4085689366490708010.bam.bai 20:59:57.044 INFO IndexFileMerger - Done merging .bai files 20:59:57.045 INFO MarkDuplicatesSpark - Shutting down engine [June 12, 2025 at 8:59:57 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. 
Runtime.totalMemory()=926941184 20:59:57.062 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 20:59:57.062 INFO MarkDuplicatesSpark - The Genome Analysis Toolkit (GATK) v092bdf2-SNAPSHOT 20:59:57.062 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/ 20:59:57.063 INFO MarkDuplicatesSpark - Executing as root@e24cea241624 on Linux v6.11.0-1015-azure amd64 20:59:57.063 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.12+7-Ubuntu-1ubuntu222.04 20:59:57.063 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 8:59:57 PM GMT 20:59:57.063 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 20:59:57.063 INFO MarkDuplicatesSpark - ------------------------------------------------------------ 20:59:57.063 INFO MarkDuplicatesSpark - HTSJDK Version: 4.2.0 20:59:57.063 INFO MarkDuplicatesSpark - Picard Version: 3.4.0 20:59:57.063 INFO MarkDuplicatesSpark - Built for Spark Version: 3.5.0 20:59:57.063 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2 20:59:57.063 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 20:59:57.063 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 20:59:57.063 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 20:59:57.063 INFO MarkDuplicatesSpark - Deflater: IntelDeflater 20:59:57.063 INFO MarkDuplicatesSpark - Inflater: IntelInflater 20:59:57.063 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20 20:59:57.063 INFO MarkDuplicatesSpark - Requester pays: disabled 20:59:57.063 INFO MarkDuplicatesSpark - Initializing engine 20:59:57.064 INFO MarkDuplicatesSpark - Done initializing engine 20:59:57.064 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument) 20:59:57.066 INFO MemoryStore - Block broadcast_1333 stored as values in memory (estimated size 305.5 
KiB, free 1903.5 MiB) 20:59:57.072 INFO MemoryStore - Block broadcast_1333_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1903.5 MiB) 20:59:57.073 INFO BlockManagerInfo - Added broadcast_1333_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1919.0 MiB) 20:59:57.073 INFO SparkContext - Created broadcast 1333 from newAPIHadoopFile at PathSplitSource.java:96 20:59:57.093 INFO MemoryStore - Block broadcast_1334 stored as values in memory (estimated size 305.5 KiB, free 1903.2 MiB) 20:59:57.099 INFO MemoryStore - Block broadcast_1334_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1903.1 MiB) 20:59:57.099 INFO BlockManagerInfo - Added broadcast_1334_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1918.9 MiB) 20:59:57.100 INFO SparkContext - Created broadcast 1334 from newAPIHadoopFile at PathSplitSource.java:96 20:59:57.131 INFO SparkContext - Starting job: collect at SparkUtils.java:205 20:59:57.131 INFO FileInputFormat - Total input files to process : 1 20:59:57.131 INFO DAGScheduler - Registering RDD 3412 (mapToPair at SparkUtils.java:161) as input to shuffle 99 20:59:57.132 INFO DAGScheduler - Got job 410 (collect at SparkUtils.java:205) with 1 output partitions 20:59:57.132 INFO DAGScheduler - Final stage: ResultStage 724 (collect at SparkUtils.java:205) 20:59:57.132 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 723) 20:59:57.132 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 723) 20:59:57.132 INFO DAGScheduler - Submitting ShuffleMapStage 723 (MapPartitionsRDD[3412] at mapToPair at SparkUtils.java:161), which has no missing parents 20:59:57.149 INFO MemoryStore - Block broadcast_1335 stored as values in memory (estimated size 459.2 KiB, free 1902.7 MiB) 20:59:57.157 INFO BlockManagerInfo - Removed broadcast_1311_piece0 on localhost:43253 in memory (size: 19.0 B, free: 1918.9 MiB) 20:59:57.158 INFO BlockManagerInfo - Removed broadcast_1331_piece0 on localhost:43253 in memory (size: 
9.5 KiB, free: 1918.9 MiB) 20:59:57.159 INFO MemoryStore - Block broadcast_1335_piece0 stored as bytes in memory (estimated size 201.4 KiB, free 1902.5 MiB) 20:59:57.159 INFO BlockManagerInfo - Added broadcast_1335_piece0 in memory on localhost:43253 (size: 201.4 KiB, free: 1918.7 MiB) 20:59:57.159 INFO SparkContext - Created broadcast 1335 from broadcast at DAGScheduler.scala:1580 20:59:57.159 INFO BlockManagerInfo - Removed broadcast_1329_piece0 on localhost:43253 in memory (size: 1850.0 B, free: 1918.7 MiB) 20:59:57.159 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 723 (MapPartitionsRDD[3412] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0)) 20:59:57.159 INFO TaskSchedulerImpl - Adding task set 723.0 with 1 tasks resource profile 0 20:59:57.160 INFO BlockManagerInfo - Removed broadcast_1322_piece0 on localhost:43253 in memory (size: 4.1 KiB, free: 1918.7 MiB) 20:59:57.160 INFO TaskSetManager - Starting task 0.0 in stage 723.0 (TID 649) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9651 bytes) 20:59:57.160 INFO BlockManagerInfo - Removed broadcast_1319_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1918.8 MiB) 20:59:57.160 INFO Executor - Running task 0.0 in stage 723.0 (TID 649) 20:59:57.161 INFO BlockManagerInfo - Removed broadcast_1328_piece0 on localhost:43253 in memory (size: 10.6 KiB, free: 1918.8 MiB) 20:59:57.162 INFO BlockManagerInfo - Removed broadcast_1320_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1918.9 MiB) 20:59:57.162 INFO BlockManagerInfo - Removed broadcast_1312_piece0 on localhost:43253 in memory (size: 20.0 B, free: 1918.9 MiB) 20:59:57.162 INFO BlockManagerInfo - Removed broadcast_1313_piece0 on localhost:43253 in memory (size: 228.2 KiB, free: 1919.1 MiB) 20:59:57.163 INFO BlockManagerInfo - Removed broadcast_1324_piece0 on localhost:43253 in memory (size: 37.0 B, free: 1919.1 MiB) 20:59:57.163 INFO BlockManagerInfo - Removed broadcast_1332_piece0 
on localhost:43253 in memory (size: 73.5 KiB, free: 1919.2 MiB) 20:59:57.163 INFO BlockManagerInfo - Removed broadcast_1309_piece0 on localhost:43253 in memory (size: 64.1 KiB, free: 1919.2 MiB) 20:59:57.164 INFO BlockManagerInfo - Removed broadcast_1314_piece0 on localhost:43253 in memory (size: 228.2 KiB, free: 1919.4 MiB) 20:59:57.164 INFO BlockManagerInfo - Removed broadcast_1330_piece0 on localhost:43253 in memory (size: 1850.0 B, free: 1919.4 MiB) 20:59:57.164 INFO BlockManagerInfo - Removed broadcast_1327_piece0 on localhost:43253 in memory (size: 9.7 KiB, free: 1919.5 MiB) 20:59:57.165 INFO BlockManagerInfo - Removed broadcast_1321_piece0 on localhost:43253 in memory (size: 201.4 KiB, free: 1919.7 MiB) 20:59:57.166 INFO BlockManagerInfo - Removed broadcast_1326_piece0 on localhost:43253 in memory (size: 14.8 KiB, free: 1919.7 MiB) 20:59:57.166 INFO BlockManagerInfo - Removed broadcast_1333_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1919.7 MiB) 20:59:57.166 INFO BlockManagerInfo - Removed broadcast_1325_piece0 on localhost:43253 in memory (size: 12.6 KiB, free: 1919.7 MiB) 20:59:57.168 INFO BlockManagerInfo - Removed broadcast_1323_piece0 on localhost:43253 in memory (size: 145.0 B, free: 1919.7 MiB) 20:59:57.199 INFO NewHadoopRDD - Input split: file:/gatkCloneMountPoint/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/mdOrderBug2.bam:0+1856 20:59:57.204 INFO Executor - Finished task 0.0 in stage 723.0 (TID 649). 
1148 bytes result sent to driver 20:59:57.205 INFO TaskSetManager - Finished task 0.0 in stage 723.0 (TID 649) in 45 ms on localhost (executor driver) (1/1) 20:59:57.205 INFO DAGScheduler - ShuffleMapStage 723 (mapToPair at SparkUtils.java:161) finished in 0.073 s 20:59:57.205 INFO DAGScheduler - looking for newly runnable stages 20:59:57.205 INFO DAGScheduler - running: HashSet() 20:59:57.205 INFO TaskSchedulerImpl - Removed TaskSet 723.0, whose tasks have all completed, from pool 20:59:57.205 INFO DAGScheduler - waiting: HashSet(ResultStage 724) 20:59:57.205 INFO DAGScheduler - failed: HashSet() 20:59:57.205 INFO DAGScheduler - Submitting ResultStage 724 (MapPartitionsRDD[3415] at mapPartitions at SparkUtils.java:188), which has no missing parents 20:59:57.206 INFO MemoryStore - Block broadcast_1336 stored as values in memory (estimated size 7.4 KiB, free 1919.0 MiB) 20:59:57.206 INFO MemoryStore - Block broadcast_1336_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1919.0 MiB) 20:59:57.207 INFO BlockManagerInfo - Added broadcast_1336_piece0 in memory on localhost:43253 (size: 4.1 KiB, free: 1919.7 MiB) 20:59:57.207 INFO SparkContext - Created broadcast 1336 from broadcast at DAGScheduler.scala:1580 20:59:57.207 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 724 (MapPartitionsRDD[3415] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0)) 20:59:57.207 INFO TaskSchedulerImpl - Adding task set 724.0 with 1 tasks resource profile 0 20:59:57.208 INFO TaskSetManager - Starting task 0.0 in stage 724.0 (TID 650) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes) 20:59:57.208 INFO Executor - Running task 0.0 in stage 724.0 (TID 650) 20:59:57.209 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:57.210 INFO ShuffleBlockFetcherIterator - Started 0 remote 
fetches in 0 ms 20:59:57.212 INFO Executor - Finished task 0.0 in stage 724.0 (TID 650). 2495 bytes result sent to driver 20:59:57.213 INFO TaskSetManager - Finished task 0.0 in stage 724.0 (TID 650) in 5 ms on localhost (executor driver) (1/1) 20:59:57.213 INFO TaskSchedulerImpl - Removed TaskSet 724.0, whose tasks have all completed, from pool 20:59:57.213 INFO DAGScheduler - ResultStage 724 (collect at SparkUtils.java:205) finished in 0.007 s 20:59:57.213 INFO DAGScheduler - Job 410 is finished. Cancelling potential speculative or zombie tasks for this job 20:59:57.213 INFO TaskSchedulerImpl - Killing all running tasks in stage 724: Stage finished 20:59:57.213 INFO DAGScheduler - Job 410 finished: collect at SparkUtils.java:205, took 0.082444 s 20:59:57.219 INFO MemoryStore - Block broadcast_1337 stored as values in memory (estimated size 1632.0 B, free 1919.0 MiB) 20:59:57.220 INFO MemoryStore - Block broadcast_1337_piece0 stored as bytes in memory (estimated size 145.0 B, free 1919.0 MiB) 20:59:57.220 INFO BlockManagerInfo - Added broadcast_1337_piece0 in memory on localhost:43253 (size: 145.0 B, free: 1919.7 MiB) 20:59:57.220 INFO SparkContext - Created broadcast 1337 from broadcast at MarkDuplicatesSparkUtils.java:126 20:59:57.220 INFO MemoryStore - Block broadcast_1338 stored as values in memory (estimated size 304.0 B, free 1919.0 MiB) 20:59:57.221 INFO MemoryStore - Block broadcast_1338_piece0 stored as bytes in memory (estimated size 37.0 B, free 1919.0 MiB) 20:59:57.221 INFO BlockManagerInfo - Added broadcast_1338_piece0 in memory on localhost:43253 (size: 37.0 B, free: 1919.7 MiB) 20:59:57.221 INFO SparkContext - Created broadcast 1338 from broadcast at MarkDuplicatesSparkUtils.java:127 20:59:57.234 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542 20:59:57.235 INFO DAGScheduler - Registering RDD 3424 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 102 20:59:57.235 INFO DAGScheduler - 
Registering RDD 3428 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 101 20:59:57.235 INFO DAGScheduler - Registering RDD 3432 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 100 20:59:57.235 INFO DAGScheduler - Got job 411 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions 20:59:57.235 INFO DAGScheduler - Final stage: ResultStage 729 (collectAsMap at MarkDuplicatesSparkUtils.java:542) 20:59:57.235 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 728) 20:59:57.235 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 728) 20:59:57.235 INFO DAGScheduler - Submitting ShuffleMapStage 726 (MapPartitionsRDD[3424] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents 20:59:57.236 INFO MemoryStore - Block broadcast_1339 stored as values in memory (estimated size 24.2 KiB, free 1919.0 MiB) 20:59:57.237 INFO MemoryStore - Block broadcast_1339_piece0 stored as bytes in memory (estimated size 12.6 KiB, free 1918.9 MiB) 20:59:57.237 INFO BlockManagerInfo - Added broadcast_1339_piece0 in memory on localhost:43253 (size: 12.6 KiB, free: 1919.7 MiB) 20:59:57.237 INFO SparkContext - Created broadcast 1339 from broadcast at DAGScheduler.scala:1580 20:59:57.238 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 726 (MapPartitionsRDD[3424] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0)) 20:59:57.238 INFO TaskSchedulerImpl - Adding task set 726.0 with 1 tasks resource profile 0 20:59:57.238 INFO TaskSetManager - Starting task 0.0 in stage 726.0 (TID 651) (localhost, executor driver, partition 0, NODE_LOCAL, 9813 bytes) 20:59:57.239 INFO Executor - Running task 0.0 in stage 726.0 (TID 651) 20:59:57.240 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks 20:59:57.240 INFO 
ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms 20:59:57.244 INFO Executor - Finished task 0.0 in stage 726.0 (TID 651). 1922 bytes result sent to driver 20:59:57.245 INFO TaskSetManager - Finished task 0.0 in stage 726.0 (TID 651) in 7 ms on localhost (executor driver) (1/1) 20:59:57.245 INFO TaskSchedulerImpl - Removed TaskSet 726.0, whose tasks have all completed, from pool 20:59:57.245 INFO DAGScheduler - ShuffleMapStage 726 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.009 s 20:59:57.245 INFO DAGScheduler - looking for newly runnable stages 20:59:57.245 INFO DAGScheduler - running: HashSet() 20:59:57.245 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 727, ShuffleMapStage 728, ResultStage 729) 20:59:57.245 INFO DAGScheduler - failed: HashSet() 20:59:57.245 INFO DAGScheduler - Submitting ShuffleMapStage 727 (MapPartitionsRDD[3428] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents 20:59:57.246 INFO MemoryStore - Block broadcast_1340 stored as values in memory (estimated size 28.3 KiB, free 1918.9 MiB) 20:59:57.247 INFO MemoryStore - Block broadcast_1340_piece0 stored as bytes in memory (estimated size 14.8 KiB, free 1918.9 MiB) 20:59:57.247 INFO BlockManagerInfo - Added broadcast_1340_piece0 in memory on localhost:43253 (size: 14.8 KiB, free: 1919.7 MiB) 20:59:57.247 INFO SparkContext - Created broadcast 1340 from broadcast at DAGScheduler.scala:1580 20:59:57.247 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 727 (MapPartitionsRDD[3428] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0)) 20:59:57.247 INFO TaskSchedulerImpl - Adding task set 727.0 with 1 tasks resource profile 0 20:59:57.248 INFO TaskSetManager - Starting task 0.0 in stage 727.0 (TID 652) (localhost, executor driver, partition 0, NODE_LOCAL, 9288 bytes) 20:59:57.248 INFO Executor - Running task 0.0 in stage 727.0 (TID 652) 20:59:57.250 INFO ShuffleBlockFetcherIterator - 
Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.250 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.255 INFO Executor - Finished task 0.0 in stage 727.0 (TID 652). 1922 bytes result sent to driver
20:59:57.255 INFO TaskSetManager - Finished task 0.0 in stage 727.0 (TID 652) in 7 ms on localhost (executor driver) (1/1)
20:59:57.255 INFO TaskSchedulerImpl - Removed TaskSet 727.0, whose tasks have all completed, from pool
20:59:57.255 INFO DAGScheduler - ShuffleMapStage 727 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.010 s
20:59:57.255 INFO DAGScheduler - looking for newly runnable stages
20:59:57.255 INFO DAGScheduler - running: HashSet()
20:59:57.255 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 728, ResultStage 729)
20:59:57.255 INFO DAGScheduler - failed: HashSet()
20:59:57.256 INFO DAGScheduler - Submitting ShuffleMapStage 728 (MapPartitionsRDD[3432] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
20:59:57.256 INFO MemoryStore - Block broadcast_1341 stored as values in memory (estimated size 19.0 KiB, free 1918.9 MiB)
20:59:57.257 INFO MemoryStore - Block broadcast_1341_piece0 stored as bytes in memory (estimated size 9.7 KiB, free 1918.9 MiB)
20:59:57.257 INFO BlockManagerInfo - Added broadcast_1341_piece0 in memory on localhost:43253 (size: 9.7 KiB, free: 1919.7 MiB)
20:59:57.257 INFO SparkContext - Created broadcast 1341 from broadcast at DAGScheduler.scala:1580
20:59:57.257 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 728 (MapPartitionsRDD[3432] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
20:59:57.257 INFO TaskSchedulerImpl - Adding task set 728.0 with 1 tasks resource profile 0
20:59:57.258 INFO TaskSetManager - Starting task 0.0 in stage 728.0 (TID 653) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
20:59:57.258 INFO Executor - Running task 0.0 in stage 728.0 (TID 653)
20:59:57.260 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.260 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.262 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.262 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.263 INFO Executor - Finished task 0.0 in stage 728.0 (TID 653). 1922 bytes result sent to driver
20:59:57.263 INFO TaskSetManager - Finished task 0.0 in stage 728.0 (TID 653) in 5 ms on localhost (executor driver) (1/1)
20:59:57.263 INFO TaskSchedulerImpl - Removed TaskSet 728.0, whose tasks have all completed, from pool
20:59:57.263 INFO DAGScheduler - ShuffleMapStage 728 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.007 s
20:59:57.263 INFO DAGScheduler - looking for newly runnable stages
20:59:57.264 INFO DAGScheduler - running: HashSet()
20:59:57.264 INFO DAGScheduler - waiting: HashSet(ResultStage 729)
20:59:57.264 INFO DAGScheduler - failed: HashSet()
20:59:57.264 INFO DAGScheduler - Submitting ResultStage 729 (MapPartitionsRDD[3434] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
20:59:57.264 INFO MemoryStore - Block broadcast_1342 stored as values in memory (estimated size 20.2 KiB, free 1918.9 MiB)
20:59:57.265 INFO MemoryStore - Block broadcast_1342_piece0 stored as bytes in memory (estimated size 10.6 KiB, free 1918.8 MiB)
20:59:57.265 INFO BlockManagerInfo - Added broadcast_1342_piece0 in memory on localhost:43253 (size: 10.6 KiB, free: 1919.7 MiB)
20:59:57.265 INFO SparkContext - Created broadcast 1342 from broadcast at DAGScheduler.scala:1580
20:59:57.265 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 729 (MapPartitionsRDD[3434] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
20:59:57.265 INFO TaskSchedulerImpl - Adding task set 729.0 with 1 tasks resource profile 0
20:59:57.266 INFO TaskSetManager - Starting task 0.0 in stage 729.0 (TID 654) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
20:59:57.266 INFO Executor - Running task 0.0 in stage 729.0 (TID 654)
20:59:57.267 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.267 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.269 INFO Executor - Finished task 0.0 in stage 729.0 (TID 654). 2010 bytes result sent to driver
20:59:57.269 INFO TaskSetManager - Finished task 0.0 in stage 729.0 (TID 654) in 3 ms on localhost (executor driver) (1/1)
20:59:57.269 INFO TaskSchedulerImpl - Removed TaskSet 729.0, whose tasks have all completed, from pool
20:59:57.269 INFO DAGScheduler - ResultStage 729 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.005 s
20:59:57.269 INFO DAGScheduler - Job 411 is finished. Cancelling potential speculative or zombie tasks for this job
20:59:57.269 INFO TaskSchedulerImpl - Killing all running tasks in stage 729: Stage finished
20:59:57.269 INFO DAGScheduler - Job 411 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.035129 s
20:59:57.281 INFO MemoryStore - Block broadcast_1343 stored as values in memory (estimated size 20.3 KiB, free 1918.8 MiB)
20:59:57.281 INFO MemoryStore - Block broadcast_1343_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1918.8 MiB)
20:59:57.282 INFO BlockManagerInfo - Added broadcast_1343_piece0 in memory on localhost:43253 (size: 1850.0 B, free: 1919.7 MiB)
20:59:57.282 INFO SparkContext - Created broadcast 1343 from broadcast at ReadsSparkSink.java:133
20:59:57.282 INFO MemoryStore - Block broadcast_1344 stored as values in memory (estimated size 20.4 KiB, free 1918.8 MiB)
20:59:57.283 INFO MemoryStore - Block broadcast_1344_piece0 stored as bytes in memory (estimated size 1850.0 B, free 1918.8 MiB)
20:59:57.283 INFO BlockManagerInfo - Added broadcast_1344_piece0 in memory on localhost:43253 (size: 1850.0 B, free: 1919.7 MiB)
20:59:57.283 INFO SparkContext - Created broadcast 1344 from broadcast at BamSink.java:76
20:59:57.285 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:59:57.285 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:59:57.285 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:59:57.296 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
20:59:57.296 INFO DAGScheduler - Registering RDD 3436 (mapToPair at SparkUtils.java:161) as input to shuffle 103
20:59:57.296 INFO DAGScheduler - Got job 412 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
20:59:57.297 INFO DAGScheduler - Final stage: ResultStage 734 (runJob at SparkHadoopWriter.scala:83)
20:59:57.297 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 733)
20:59:57.297 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 733)
20:59:57.297 INFO DAGScheduler - Submitting ShuffleMapStage 733 (MapPartitionsRDD[3436] at mapToPair at SparkUtils.java:161), which has no missing parents
20:59:57.298 INFO MemoryStore - Block broadcast_1345 stored as values in memory (estimated size 18.1 KiB, free 1918.8 MiB)
20:59:57.298 INFO MemoryStore - Block broadcast_1345_piece0 stored as bytes in memory (estimated size 9.5 KiB, free 1918.8 MiB)
20:59:57.298 INFO BlockManagerInfo - Added broadcast_1345_piece0 in memory on localhost:43253 (size: 9.5 KiB, free: 1919.7 MiB)
20:59:57.298 INFO SparkContext - Created broadcast 1345 from broadcast at DAGScheduler.scala:1580
20:59:57.299 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 733 (MapPartitionsRDD[3436] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
20:59:57.299 INFO TaskSchedulerImpl - Adding task set 733.0 with 1 tasks resource profile 0
20:59:57.299 INFO TaskSetManager - Starting task 0.0 in stage 733.0 (TID 655) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
20:59:57.299 INFO Executor - Running task 0.0 in stage 733.0 (TID 655)
20:59:57.301 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.301 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.303 INFO ShuffleBlockFetcherIterator - Getting 1 (117.0 B) non-empty blocks including 1 (117.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.303 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.305 INFO Executor - Finished task 0.0 in stage 733.0 (TID 655). 1922 bytes result sent to driver
20:59:57.305 INFO TaskSetManager - Finished task 0.0 in stage 733.0 (TID 655) in 6 ms on localhost (executor driver) (1/1)
20:59:57.305 INFO TaskSchedulerImpl - Removed TaskSet 733.0, whose tasks have all completed, from pool
20:59:57.305 INFO DAGScheduler - ShuffleMapStage 733 (mapToPair at SparkUtils.java:161) finished in 0.008 s
20:59:57.305 INFO DAGScheduler - looking for newly runnable stages
20:59:57.305 INFO DAGScheduler - running: HashSet()
20:59:57.306 INFO DAGScheduler - waiting: HashSet(ResultStage 734)
20:59:57.306 INFO DAGScheduler - failed: HashSet()
20:59:57.306 INFO DAGScheduler - Submitting ResultStage 734 (MapPartitionsRDD[3441] at mapToPair at BamSink.java:91), which has no missing parents
20:59:57.312 INFO MemoryStore - Block broadcast_1346 stored as values in memory (estimated size 163.3 KiB, free 1918.6 MiB)
20:59:57.313 INFO MemoryStore - Block broadcast_1346_piece0 stored as bytes in memory (estimated size 73.4 KiB, free 1918.5 MiB)
20:59:57.313 INFO BlockManagerInfo - Added broadcast_1346_piece0 in memory on localhost:43253 (size: 73.4 KiB, free: 1919.6 MiB)
20:59:57.314 INFO SparkContext - Created broadcast 1346 from broadcast at DAGScheduler.scala:1580
20:59:57.314 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 734 (MapPartitionsRDD[3441] at mapToPair at BamSink.java:91) (first 15 tasks are for partitions Vector(0))
20:59:57.314 INFO TaskSchedulerImpl - Adding task set 734.0 with 1 tasks resource profile 0
20:59:57.314 INFO TaskSetManager - Starting task 0.0 in stage 734.0 (TID 656) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
20:59:57.314 INFO Executor - Running task 0.0 in stage 734.0 (TID 656)
20:59:57.318 INFO ShuffleBlockFetcherIterator - Getting 1 (868.0 B) non-empty blocks including 1 (868.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
20:59:57.318 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
20:59:57.321 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:59:57.321 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:59:57.321 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:59:57.321 INFO PathOutputCommitterFactory - No output committer factory defined, defaulting to FileOutputCommitterFactory
20:59:57.321 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
20:59:57.321 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20:59:57.331 INFO FileOutputCommitter - Saved output of task 'attempt_202506122059572991023011296141811_3441_r_000000_0' to file:/tmp/local18970223784451453307/markdups13604475886690836019.bam.parts/_temporary/0/task_202506122059572991023011296141811_3441_r_000000
20:59:57.331 INFO SparkHadoopMapRedUtil - attempt_202506122059572991023011296141811_3441_r_000000_0: Committed. Elapsed time: 0 ms.
20:59:57.331 INFO Executor - Finished task 0.0 in stage 734.0 (TID 656). 1858 bytes result sent to driver
20:59:57.332 INFO TaskSetManager - Finished task 0.0 in stage 734.0 (TID 656) in 18 ms on localhost (executor driver) (1/1)
20:59:57.332 INFO TaskSchedulerImpl - Removed TaskSet 734.0, whose tasks have all completed, from pool
20:59:57.332 INFO DAGScheduler - ResultStage 734 (runJob at SparkHadoopWriter.scala:83) finished in 0.026 s
20:59:57.332 INFO DAGScheduler - Job 412 is finished. Cancelling potential speculative or zombie tasks for this job
20:59:57.332 INFO TaskSchedulerImpl - Killing all running tasks in stage 734: Stage finished
20:59:57.332 INFO DAGScheduler - Job 412 finished: runJob at SparkHadoopWriter.scala:83, took 0.035935 s
20:59:57.332 INFO SparkHadoopWriter - Start to commit write Job job_202506122059572991023011296141811_3441.
20:59:57.335 INFO SparkHadoopWriter - Write Job job_202506122059572991023011296141811_3441 committed. Elapsed time: 2 ms.
20:59:57.341 INFO HadoopFileSystemWrapper - Concatenating 3 parts to /tmp/local18970223784451453307/markdups13604475886690836019.bam
20:59:57.343 INFO HadoopFileSystemWrapper - Concatenating to /tmp/local18970223784451453307/markdups13604475886690836019.bam done
20:59:57.343 INFO IndexFileMerger - Merging .sbi files in temp directory /tmp/local18970223784451453307/markdups13604475886690836019.bam.parts/ to /tmp/local18970223784451453307/markdups13604475886690836019.bam.sbi
20:59:57.346 INFO IndexFileMerger - Done merging .sbi files
20:59:57.346 INFO IndexFileMerger - Merging .bai files in temp directory /tmp/local18970223784451453307/markdups13604475886690836019.bam.parts/ to /tmp/local18970223784451453307/markdups13604475886690836019.bam.bai
20:59:57.349 INFO IndexFileMerger - Done merging .bai files
20:59:57.349 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 8:59:57 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1535115264
21:00:22.669 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:22.670 INFO MarkDuplicatesSpark - The Genome Analysis Toolkit (GATK) v092bdf2-SNAPSHOT
21:00:22.670 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
21:00:22.670 INFO MarkDuplicatesSpark - Executing as root@e24cea241624 on Linux v6.11.0-1015-azure amd64
21:00:22.670 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.12+7-Ubuntu-1ubuntu222.04
21:00:22.670 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 9:00:22 PM GMT
21:00:22.670 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:22.670 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:22.670 INFO MarkDuplicatesSpark - HTSJDK Version: 4.2.0
21:00:22.670 INFO MarkDuplicatesSpark - Picard Version: 3.4.0
21:00:22.670 INFO MarkDuplicatesSpark - Built for Spark Version: 3.5.0
21:00:22.670 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
21:00:22.670 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
21:00:22.670 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
21:00:22.670 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
21:00:22.670 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
21:00:22.670 INFO MarkDuplicatesSpark - Inflater: IntelInflater
21:00:22.670 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
21:00:22.670 INFO MarkDuplicatesSpark - Requester pays: disabled
21:00:22.670 INFO MarkDuplicatesSpark - Initializing engine
21:00:22.670 INFO MarkDuplicatesSpark - Done initializing engine
21:00:22.670 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
21:00:22.672 INFO MemoryStore - Block broadcast_1923 stored as values in memory (estimated size 305.5
KiB, free 1917.6 MiB)
21:00:22.679 INFO MemoryStore - Block broadcast_1923_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1917.5 MiB)
21:00:22.679 INFO BlockManagerInfo - Added broadcast_1923_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1919.4 MiB)
21:00:22.679 INFO SparkContext - Created broadcast 1923 from newAPIHadoopFile at PathSplitSource.java:96
21:00:22.700 INFO MemoryStore - Block broadcast_1924 stored as values in memory (estimated size 305.5 KiB, free 1917.2 MiB)
21:00:22.707 INFO MemoryStore - Block broadcast_1924_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1917.1 MiB)
21:00:22.707 INFO BlockManagerInfo - Added broadcast_1924_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1919.4 MiB)
21:00:22.707 INFO SparkContext - Created broadcast 1924 from newAPIHadoopFile at PathSplitSource.java:96
21:00:22.740 INFO SparkContext - Starting job: collect at SparkUtils.java:205
21:00:22.740 INFO FileInputFormat - Total input files to process : 1
21:00:22.740 INFO DAGScheduler - Registering RDD 5333 (mapToPair at SparkUtils.java:161) as input to shuffle 294
21:00:22.740 INFO DAGScheduler - Got job 543 (collect at SparkUtils.java:205) with 1 output partitions
21:00:22.741 INFO DAGScheduler - Final stage: ResultStage 1196 (collect at SparkUtils.java:205)
21:00:22.741 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1195)
21:00:22.741 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1195)
21:00:22.741 INFO DAGScheduler - Submitting ShuffleMapStage 1195 (MapPartitionsRDD[5333] at mapToPair at SparkUtils.java:161), which has no missing parents
21:00:22.758 INFO MemoryStore - Block broadcast_1925 stored as values in memory (estimated size 492.6 KiB, free 1916.6 MiB)
21:00:22.760 INFO MemoryStore - Block broadcast_1925_piece0 stored as bytes in memory (estimated size 208.9 KiB, free 1916.4 MiB)
21:00:22.760 INFO BlockManagerInfo - Added broadcast_1925_piece0 in memory on localhost:43253 (size: 208.9 KiB, free: 1919.2 MiB)
21:00:22.760 INFO SparkContext - Created broadcast 1925 from broadcast at DAGScheduler.scala:1580
21:00:22.760 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1195 (MapPartitionsRDD[5333] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
21:00:22.760 INFO TaskSchedulerImpl - Adding task set 1195.0 with 1 tasks resource profile 0
21:00:22.761 INFO TaskSetManager - Starting task 0.0 in stage 1195.0 (TID 1027) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9674 bytes)
21:00:22.761 INFO Executor - Running task 0.0 in stage 1195.0 (TID 1027)
21:00:22.788 INFO NewHadoopRDD - Input split: file:/gatkCloneMountPoint/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/example.chr1.1-1K.unmarkedDups.bam:0+13412
21:00:22.793 INFO Executor - Finished task 0.0 in stage 1195.0 (TID 1027). 1148 bytes result sent to driver
21:00:22.793 INFO TaskSetManager - Finished task 0.0 in stage 1195.0 (TID 1027) in 32 ms on localhost (executor driver) (1/1)
21:00:22.794 INFO TaskSchedulerImpl - Removed TaskSet 1195.0, whose tasks have all completed, from pool
21:00:22.794 INFO DAGScheduler - ShuffleMapStage 1195 (mapToPair at SparkUtils.java:161) finished in 0.053 s
21:00:22.794 INFO DAGScheduler - looking for newly runnable stages
21:00:22.794 INFO DAGScheduler - running: HashSet()
21:00:22.794 INFO DAGScheduler - waiting: HashSet(ResultStage 1196)
21:00:22.794 INFO DAGScheduler - failed: HashSet()
21:00:22.794 INFO DAGScheduler - Submitting ResultStage 1196 (MapPartitionsRDD[5336] at mapPartitions at SparkUtils.java:188), which has no missing parents
21:00:22.794 INFO MemoryStore - Block broadcast_1926 stored as values in memory (estimated size 7.4 KiB, free 1916.4 MiB)
21:00:22.795 INFO MemoryStore - Block broadcast_1926_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1916.4 MiB)
21:00:22.795 INFO BlockManagerInfo - Added broadcast_1926_piece0 in memory on localhost:43253 (size: 4.1 KiB, free: 1919.1 MiB)
21:00:22.795 INFO SparkContext - Created broadcast 1926 from broadcast at DAGScheduler.scala:1580
21:00:22.795 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1196 (MapPartitionsRDD[5336] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
21:00:22.795 INFO TaskSchedulerImpl - Adding task set 1196.0 with 1 tasks resource profile 0
21:00:22.796 INFO TaskSetManager - Starting task 0.0 in stage 1196.0 (TID 1028) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:22.796 INFO Executor - Running task 0.0 in stage 1196.0 (TID 1028)
21:00:22.796 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.797 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.799 INFO Executor - Finished task 0.0 in stage 1196.0 (TID 1028). 2329 bytes result sent to driver
21:00:22.799 INFO TaskSetManager - Finished task 0.0 in stage 1196.0 (TID 1028) in 4 ms on localhost (executor driver) (1/1)
21:00:22.799 INFO TaskSchedulerImpl - Removed TaskSet 1196.0, whose tasks have all completed, from pool
21:00:22.799 INFO DAGScheduler - ResultStage 1196 (collect at SparkUtils.java:205) finished in 0.005 s
21:00:22.799 INFO DAGScheduler - Job 543 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:22.799 INFO TaskSchedulerImpl - Killing all running tasks in stage 1196: Stage finished
21:00:22.799 INFO DAGScheduler - Job 543 finished: collect at SparkUtils.java:205, took 0.059716 s
21:00:22.804 INFO MemoryStore - Block broadcast_1927 stored as values in memory (estimated size 7.8 KiB, free 1916.4 MiB)
21:00:22.804 INFO MemoryStore - Block broadcast_1927_piece0 stored as bytes in memory (estimated size 540.0 B, free 1916.4 MiB)
21:00:22.804 INFO BlockManagerInfo - Added broadcast_1927_piece0 in memory on localhost:43253 (size: 540.0 B, free: 1919.1 MiB)
21:00:22.804 INFO SparkContext - Created broadcast 1927 from broadcast at MarkDuplicatesSparkUtils.java:126
21:00:22.804 INFO MemoryStore - Block broadcast_1928 stored as values in memory (estimated size 2.6 KiB, free 1916.4 MiB)
21:00:22.805 INFO MemoryStore - Block broadcast_1928_piece0 stored as bytes in memory (estimated size 209.0 B, free 1916.4 MiB)
21:00:22.805 INFO BlockManagerInfo - Added broadcast_1928_piece0 in memory on localhost:43253 (size: 209.0 B, free: 1919.1 MiB)
21:00:22.805 INFO SparkContext - Created broadcast 1928 from broadcast at MarkDuplicatesSparkUtils.java:127
21:00:22.819 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
21:00:22.819 INFO DAGScheduler - Registering RDD 5345 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 297
21:00:22.819 INFO DAGScheduler - Registering RDD 5349 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 296
21:00:22.819 INFO DAGScheduler - Registering RDD 5353 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 295
21:00:22.819 INFO DAGScheduler - Got job 544 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
21:00:22.819 INFO DAGScheduler - Final stage: ResultStage 1201 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
21:00:22.819 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1200)
21:00:22.820 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1200)
21:00:22.820 INFO DAGScheduler - Submitting ShuffleMapStage 1198 (MapPartitionsRDD[5345] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
21:00:22.821 INFO MemoryStore - Block broadcast_1929 stored as values in memory (estimated size 57.6 KiB, free 1916.4 MiB)
21:00:22.822 INFO MemoryStore - Block broadcast_1929_piece0 stored as bytes in memory (estimated size 19.8 KiB, free 1916.3 MiB)
21:00:22.822 INFO BlockManagerInfo - Added broadcast_1929_piece0 in memory on localhost:43253 (size: 19.8 KiB, free: 1919.1 MiB)
21:00:22.822 INFO SparkContext - Created broadcast 1929 from broadcast at DAGScheduler.scala:1580
21:00:22.822 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1198 (MapPartitionsRDD[5345] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
21:00:22.822 INFO TaskSchedulerImpl - Adding task set 1198.0 with 1 tasks resource profile 0
21:00:22.823 INFO TaskSetManager - Starting task 0.0 in stage 1198.0 (TID 1029) (localhost, executor driver, partition 0, NODE_LOCAL, 9813 bytes)
21:00:22.823 INFO Executor - Running task 0.0 in stage 1198.0 (TID 1029)
21:00:22.825 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.825 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.829 INFO Executor - Finished task 0.0 in stage 1198.0 (TID 1029). 1922 bytes result sent to driver
21:00:22.830 INFO TaskSetManager - Finished task 0.0 in stage 1198.0 (TID 1029) in 7 ms on localhost (executor driver) (1/1)
21:00:22.830 INFO TaskSchedulerImpl - Removed TaskSet 1198.0, whose tasks have all completed, from pool
21:00:22.830 INFO DAGScheduler - ShuffleMapStage 1198 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.010 s
21:00:22.830 INFO DAGScheduler - looking for newly runnable stages
21:00:22.830 INFO DAGScheduler - running: HashSet()
21:00:22.830 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1200, ResultStage 1201, ShuffleMapStage 1199)
21:00:22.830 INFO DAGScheduler - failed: HashSet()
21:00:22.830 INFO DAGScheduler - Submitting ShuffleMapStage 1199 (MapPartitionsRDD[5349] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
21:00:22.831 INFO MemoryStore - Block broadcast_1930 stored as values in memory (estimated size 61.7 KiB, free 1916.3 MiB)
21:00:22.832 INFO MemoryStore - Block broadcast_1930_piece0 stored as bytes in memory (estimated size 22.0 KiB, free 1916.3 MiB)
21:00:22.832 INFO BlockManagerInfo - Added broadcast_1930_piece0 in memory on localhost:43253 (size: 22.0 KiB, free: 1919.1 MiB)
21:00:22.832 INFO SparkContext - Created broadcast 1930 from broadcast at DAGScheduler.scala:1580
21:00:22.832 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1199 (MapPartitionsRDD[5349] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
21:00:22.832 INFO TaskSchedulerImpl - Adding task set 1199.0 with 1 tasks resource profile 0
21:00:22.832 INFO TaskSetManager - Starting task 0.0 in stage 1199.0 (TID 1030) (localhost, executor driver, partition 0, NODE_LOCAL, 9288 bytes)
21:00:22.833 INFO Executor - Running task 0.0 in stage 1199.0 (TID 1030)
21:00:22.834 INFO ShuffleBlockFetcherIterator - Getting 1 (2.0 KiB) non-empty blocks including 1 (2.0 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.834 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.839 INFO Executor - Finished task 0.0 in stage 1199.0 (TID 1030). 1922 bytes result sent to driver
21:00:22.839 INFO TaskSetManager - Finished task 0.0 in stage 1199.0 (TID 1030) in 7 ms on localhost (executor driver) (1/1)
21:00:22.839 INFO TaskSchedulerImpl - Removed TaskSet 1199.0, whose tasks have all completed, from pool
21:00:22.839 INFO DAGScheduler - ShuffleMapStage 1199 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.009 s
21:00:22.839 INFO DAGScheduler - looking for newly runnable stages
21:00:22.839 INFO DAGScheduler - running: HashSet()
21:00:22.839 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1200, ResultStage 1201)
21:00:22.839 INFO DAGScheduler - failed: HashSet()
21:00:22.839 INFO DAGScheduler - Submitting ShuffleMapStage 1200 (MapPartitionsRDD[5353] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
21:00:22.840 INFO MemoryStore - Block broadcast_1931 stored as values in memory (estimated size 52.4 KiB, free 1916.2 MiB)
21:00:22.841 INFO MemoryStore - Block broadcast_1931_piece0 stored as bytes in memory (estimated size 17.3 KiB, free 1916.2 MiB)
21:00:22.841 INFO BlockManagerInfo - Added broadcast_1931_piece0 in memory on localhost:43253 (size: 17.3 KiB, free: 1919.1 MiB)
21:00:22.841 INFO SparkContext - Created broadcast 1931 from broadcast at DAGScheduler.scala:1580
21:00:22.841 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1200 (MapPartitionsRDD[5353] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
21:00:22.841 INFO TaskSchedulerImpl - Adding task set 1200.0 with 1 tasks resource profile 0
21:00:22.842 INFO TaskSetManager - Starting task 0.0 in stage 1200.0 (TID 1031) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
21:00:22.842 INFO Executor - Running task 0.0 in stage 1200.0 (TID 1031)
21:00:22.843 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.843 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.845 INFO ShuffleBlockFetcherIterator - Getting 1 (2.7 KiB) non-empty blocks including 1 (2.7 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.846 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.847 INFO Executor - Finished task 0.0 in stage 1200.0 (TID 1031). 1922 bytes result sent to driver
21:00:22.847 INFO TaskSetManager - Finished task 0.0 in stage 1200.0 (TID 1031) in 6 ms on localhost (executor driver) (1/1)
21:00:22.848 INFO TaskSchedulerImpl - Removed TaskSet 1200.0, whose tasks have all completed, from pool
21:00:22.848 INFO DAGScheduler - ShuffleMapStage 1200 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.009 s
21:00:22.848 INFO DAGScheduler - looking for newly runnable stages
21:00:22.848 INFO DAGScheduler - running: HashSet()
21:00:22.848 INFO DAGScheduler - waiting: HashSet(ResultStage 1201)
21:00:22.848 INFO DAGScheduler - failed: HashSet()
21:00:22.848 INFO DAGScheduler - Submitting ResultStage 1201 (MapPartitionsRDD[5355] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
21:00:22.849 INFO MemoryStore - Block broadcast_1932 stored as values in memory (estimated size 53.6 KiB, free 1916.1 MiB)
21:00:22.849 INFO MemoryStore - Block broadcast_1932_piece0 stored as bytes in memory (estimated size 18.0 KiB, free 1916.1 MiB)
21:00:22.849 INFO BlockManagerInfo - Added broadcast_1932_piece0 in memory on localhost:43253 (size: 18.0 KiB, free: 1919.1 MiB)
21:00:22.850 INFO SparkContext - Created broadcast 1932 from broadcast at DAGScheduler.scala:1580
21:00:22.850 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1201 (MapPartitionsRDD[5355] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
21:00:22.850 INFO TaskSchedulerImpl - Adding task set 1201.0 with 1 tasks resource profile 0
21:00:22.850 INFO TaskSetManager - Starting task 0.0 in stage 1201.0 (TID 1032) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:22.850 INFO Executor - Running task 0.0 in stage 1201.0 (TID 1032)
21:00:22.852 INFO ShuffleBlockFetcherIterator - Getting 1 (251.0 B) non-empty blocks including 1 (251.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.852 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.853 INFO Executor - Finished task 0.0 in stage 1201.0 (TID 1032). 2371 bytes result sent to driver
21:00:22.853 INFO TaskSetManager - Finished task 0.0 in stage 1201.0 (TID 1032) in 3 ms on localhost (executor driver) (1/1)
21:00:22.853 INFO TaskSchedulerImpl - Removed TaskSet 1201.0, whose tasks have all completed, from pool
21:00:22.853 INFO DAGScheduler - ResultStage 1201 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.005 s
21:00:22.853 INFO DAGScheduler - Job 544 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:22.853 INFO TaskSchedulerImpl - Killing all running tasks in stage 1201: Stage finished
21:00:22.853 INFO DAGScheduler - Job 544 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.034717 s
21:00:22.864 INFO MemoryStore - Block broadcast_1933 stored as values in memory (estimated size 179.4 KiB, free 1915.9 MiB)
21:00:22.865 INFO MemoryStore - Block broadcast_1933_piece0 stored as bytes in memory (estimated size 7.9 KiB, free 1915.9 MiB)
21:00:22.865 INFO BlockManagerInfo - Added broadcast_1933_piece0 in memory on localhost:43253 (size: 7.9 KiB, free: 1919.1 MiB)
21:00:22.865 INFO SparkContext - Created broadcast 1933 from broadcast at ReadsSparkSink.java:133
21:00:22.869 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
21:00:22.869 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
21:00:22.869 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21:00:22.880 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
21:00:22.880 INFO DAGScheduler - Registering RDD 5357 (mapToPair at SparkUtils.java:161) as input to shuffle 298
21:00:22.880 INFO DAGScheduler - Got job 545 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
21:00:22.880 INFO DAGScheduler - Final stage: ResultStage 1206 (runJob at SparkHadoopWriter.scala:83)
21:00:22.880 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1205)
21:00:22.880 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1205)
21:00:22.881 INFO DAGScheduler - Submitting ShuffleMapStage 1205 (MapPartitionsRDD[5357] at mapToPair at SparkUtils.java:161), which has no missing parents
21:00:22.881 INFO MemoryStore - Block broadcast_1934 stored as values in memory (estimated size 51.5 KiB, free 1915.9 MiB)
21:00:22.882 INFO MemoryStore - Block broadcast_1934_piece0 stored as bytes in memory (estimated size 16.8 KiB, free 1915.9 MiB)
21:00:22.882 INFO BlockManagerInfo - Added broadcast_1934_piece0 in memory on localhost:43253 (size: 16.8 KiB, free: 1919.0 MiB)
21:00:22.882 INFO SparkContext - Created broadcast 1934 from broadcast at DAGScheduler.scala:1580
21:00:22.882 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1205 (MapPartitionsRDD[5357] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
21:00:22.882 INFO TaskSchedulerImpl - Adding task set 1205.0 with 1 tasks resource profile 0
21:00:22.883 INFO TaskSetManager - Starting task 0.0 in stage 1205.0 (TID 1033) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
21:00:22.883 INFO Executor - Running task 0.0 in stage 1205.0 (TID 1033)
21:00:22.885 INFO ShuffleBlockFetcherIterator - Getting 1 (16.3 KiB) non-empty blocks including 1 (16.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.885 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.887 INFO ShuffleBlockFetcherIterator - Getting 1 (2.7 KiB) non-empty blocks including 1 (2.7 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.888 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.889 INFO Executor - Finished task 0.0 in stage 1205.0 (TID 1033). 1922 bytes result sent to driver
21:00:22.889 INFO TaskSetManager - Finished task 0.0 in stage 1205.0 (TID 1033) in 6 ms on localhost (executor driver) (1/1)
21:00:22.890 INFO TaskSchedulerImpl - Removed TaskSet 1205.0, whose tasks have all completed, from pool
21:00:22.890 INFO DAGScheduler - ShuffleMapStage 1205 (mapToPair at SparkUtils.java:161) finished in 0.009 s
21:00:22.890 INFO DAGScheduler - looking for newly runnable stages
21:00:22.890 INFO DAGScheduler - running: HashSet()
21:00:22.890 INFO DAGScheduler - waiting: HashSet(ResultStage 1206)
21:00:22.890 INFO DAGScheduler - failed: HashSet()
21:00:22.890 INFO DAGScheduler - Submitting ResultStage 1206 (MapPartitionsRDD[5363] at saveAsTextFile at SamSink.java:65), which has no missing parents
21:00:22.896 INFO MemoryStore - Block broadcast_1935 stored as values in memory (estimated size 196.3 KiB, free 1915.7 MiB)
21:00:22.897 INFO MemoryStore - Block broadcast_1935_piece0 stored as bytes in memory (estimated size 80.5 KiB, free 1915.6 MiB)
21:00:22.897 INFO BlockManagerInfo - Added broadcast_1935_piece0 in memory on localhost:43253 (size: 80.5 KiB, free: 1919.0 MiB)
21:00:22.898 INFO SparkContext - Created broadcast 1935 from broadcast at DAGScheduler.scala:1580
21:00:22.898 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1206 (MapPartitionsRDD[5363] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0))
21:00:22.898 INFO TaskSchedulerImpl - Adding task set 1206.0 with 1 tasks resource profile 0
21:00:22.898 INFO TaskSetManager - Starting task 0.0 in stage 1206.0 (TID 1034) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:22.898 INFO Executor - Running task 0.0 in stage 1206.0 (TID 1034)
21:00:22.902 INFO ShuffleBlockFetcherIterator - Getting 1 (14.8 KiB) non-empty blocks including 1 (14.8 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:22.902 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:22.905 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
21:00:22.905 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
21:00:22.905 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21:00:22.911 INFO FileOutputCommitter - Saved output of task 'attempt_202506122100225990611740437115187_5363_m_000000_0' to file:/tmp/MarkDups11118196389712080783/MarkDups.sam.parts/_temporary/0/task_202506122100225990611740437115187_5363_m_000000
21:00:22.911 INFO SparkHadoopMapRedUtil - attempt_202506122100225990611740437115187_5363_m_000000_0: Committed. Elapsed time: 0 ms.
21:00:22.912 INFO Executor - Finished task 0.0 in stage 1206.0 (TID 1034). 1858 bytes result sent to driver
21:00:22.912 INFO TaskSetManager - Finished task 0.0 in stage 1206.0 (TID 1034) in 14 ms on localhost (executor driver) (1/1)
21:00:22.912 INFO TaskSchedulerImpl - Removed TaskSet 1206.0, whose tasks have all completed, from pool
21:00:22.912 INFO DAGScheduler - ResultStage 1206 (runJob at SparkHadoopWriter.scala:83) finished in 0.022 s
21:00:22.912 INFO DAGScheduler - Job 545 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:22.912 INFO TaskSchedulerImpl - Killing all running tasks in stage 1206: Stage finished
21:00:22.912 INFO DAGScheduler - Job 545 finished: runJob at SparkHadoopWriter.scala:83, took 0.032425 s
21:00:22.912 INFO SparkHadoopWriter - Start to commit write Job job_202506122100225990611740437115187_5363.
21:00:22.915 INFO SparkHadoopWriter - Write Job job_202506122100225990611740437115187_5363 committed. Elapsed time: 2 ms.
21:00:22.919 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups11118196389712080783/MarkDups.sam
21:00:22.921 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups11118196389712080783/MarkDups.sam done
21:00:22.922 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 9:00:22 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes.
Runtime.totalMemory()=1535115264
21:00:22.937 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:22.937 INFO MarkDuplicatesSpark - The Genome Analysis Toolkit (GATK) v092bdf2-SNAPSHOT
21:00:22.937 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
21:00:22.937 INFO MarkDuplicatesSpark - Executing as root@e24cea241624 on Linux v6.11.0-1015-azure amd64
21:00:22.937 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.12+7-Ubuntu-1ubuntu222.04
21:00:22.937 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 9:00:22 PM GMT
21:00:22.937 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:22.938 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:22.938 INFO MarkDuplicatesSpark - HTSJDK Version: 4.2.0
21:00:22.938 INFO MarkDuplicatesSpark - Picard Version: 3.4.0
21:00:22.938 INFO MarkDuplicatesSpark - Built for Spark Version: 3.5.0
21:00:22.938 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
21:00:22.938 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
21:00:22.938 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
21:00:22.938 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
21:00:22.938 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
21:00:22.938 INFO MarkDuplicatesSpark - Inflater: IntelInflater
21:00:22.938 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
21:00:22.938 INFO MarkDuplicatesSpark - Requester pays: disabled
21:00:22.938 INFO MarkDuplicatesSpark - Initializing engine
21:00:22.938 INFO MarkDuplicatesSpark - Done initializing engine
21:00:22.938 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
21:00:22.940 INFO MemoryStore - Block broadcast_1936 stored as values in memory (estimated size 305.5 KiB, free 1915.3 MiB)
21:00:22.951 INFO MemoryStore - Block broadcast_1936_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1915.2 MiB)
21:00:22.951 INFO BlockManagerInfo - Added broadcast_1936_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1918.9 MiB)
21:00:22.951 INFO SparkContext - Created broadcast 1936 from newAPIHadoopFile at PathSplitSource.java:96
21:00:22.978 INFO MemoryStore - Block broadcast_1937 stored as values in memory (estimated size 305.5 KiB, free 1914.9 MiB)
21:00:22.985 INFO MemoryStore - Block broadcast_1937_piece0 stored as bytes in memory (estimated size 64.0 KiB, free 1914.9 MiB)
21:00:22.985 INFO BlockManagerInfo - Added broadcast_1937_piece0 in memory on localhost:43253 (size: 64.0 KiB, free: 1918.8 MiB)
21:00:22.985 INFO SparkContext - Created broadcast 1937 from newAPIHadoopFile at PathSplitSource.java:96
21:00:23.017 INFO SparkContext - Starting job: collect at SparkUtils.java:205
21:00:23.017 INFO FileInputFormat - Total input files to process : 1
21:00:23.017 INFO DAGScheduler - Registering RDD 5377 (mapToPair at SparkUtils.java:161) as input to shuffle 299
21:00:23.017 INFO DAGScheduler - Got job 546 (collect at SparkUtils.java:205) with 1 output partitions
21:00:23.017 INFO DAGScheduler - Final stage: ResultStage 1208 (collect at SparkUtils.java:205)
21:00:23.018 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1207)
21:00:23.018 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1207)
21:00:23.018 INFO DAGScheduler - Submitting ShuffleMapStage 1207 (MapPartitionsRDD[5377] at mapToPair at SparkUtils.java:161), which has no missing parents
21:00:23.035 INFO MemoryStore - Block broadcast_1938 stored as values in memory (estimated size 456.1 KiB, free 1914.4 MiB)
21:00:23.038 INFO MemoryStore - Block broadcast_1938_piece0 stored as bytes in memory (estimated size 200.0 KiB, free 1914.2 MiB)
21:00:23.038 INFO BlockManagerInfo - Added broadcast_1938_piece0 in memory on localhost:43253 (size: 200.0 KiB, free: 1918.7 MiB)
21:00:23.038 INFO SparkContext - Created broadcast 1938 from broadcast at DAGScheduler.scala:1580
21:00:23.039 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1207 (MapPartitionsRDD[5377] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
21:00:23.039 INFO TaskSchedulerImpl - Adding task set 1207.0 with 1 tasks resource profile 0
21:00:23.039 INFO TaskSetManager - Starting task 0.0 in stage 1207.0 (TID 1035) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9653 bytes)
21:00:23.039 INFO Executor - Running task 0.0 in stage 1207.0 (TID 1035)
21:00:23.073 INFO NewHadoopRDD - Input split: file:/gatkCloneMountPoint/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/optical_dupes.bam:0+678
21:00:23.077 INFO Executor - Finished task 0.0 in stage 1207.0 (TID 1035). 1148 bytes result sent to driver
21:00:23.077 INFO TaskSetManager - Finished task 0.0 in stage 1207.0 (TID 1035) in 38 ms on localhost (executor driver) (1/1)
21:00:23.077 INFO TaskSchedulerImpl - Removed TaskSet 1207.0, whose tasks have all completed, from pool
21:00:23.077 INFO DAGScheduler - ShuffleMapStage 1207 (mapToPair at SparkUtils.java:161) finished in 0.059 s
21:00:23.077 INFO DAGScheduler - looking for newly runnable stages
21:00:23.077 INFO DAGScheduler - running: HashSet()
21:00:23.077 INFO DAGScheduler - waiting: HashSet(ResultStage 1208)
21:00:23.077 INFO DAGScheduler - failed: HashSet()
21:00:23.077 INFO DAGScheduler - Submitting ResultStage 1208 (MapPartitionsRDD[5380] at mapPartitions at SparkUtils.java:188), which has no missing parents
21:00:23.078 INFO MemoryStore - Block broadcast_1939 stored as values in memory (estimated size 7.4 KiB, free 1914.2 MiB)
21:00:23.078 INFO MemoryStore - Block broadcast_1939_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1914.2 MiB)
21:00:23.078 INFO BlockManagerInfo - Added broadcast_1939_piece0 in memory on localhost:43253 (size: 4.1 KiB, free: 1918.6 MiB)
21:00:23.079 INFO SparkContext - Created broadcast 1939 from broadcast at DAGScheduler.scala:1580
21:00:23.079 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1208 (MapPartitionsRDD[5380] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
21:00:23.079 INFO TaskSchedulerImpl - Adding task set 1208.0 with 1 tasks resource profile 0
21:00:23.079 INFO TaskSetManager - Starting task 0.0 in stage 1208.0 (TID 1036) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:23.079 INFO Executor - Running task 0.0 in stage 1208.0 (TID 1036)
21:00:23.080 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.080 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.082 INFO Executor - Finished task 0.0 in stage 1208.0 (TID 1036). 2269 bytes result sent to driver
21:00:23.082 INFO TaskSetManager - Finished task 0.0 in stage 1208.0 (TID 1036) in 3 ms on localhost (executor driver) (1/1)
21:00:23.083 INFO TaskSchedulerImpl - Removed TaskSet 1208.0, whose tasks have all completed, from pool
21:00:23.083 INFO DAGScheduler - ResultStage 1208 (collect at SparkUtils.java:205) finished in 0.005 s
21:00:23.083 INFO DAGScheduler - Job 546 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:23.083 INFO TaskSchedulerImpl - Killing all running tasks in stage 1208: Stage finished
21:00:23.083 INFO DAGScheduler - Job 546 finished: collect at SparkUtils.java:205, took 0.066164 s
21:00:23.087 INFO MemoryStore - Block broadcast_1940 stored as values in memory (estimated size 136.0 B, free 1914.2 MiB)
21:00:23.087 INFO MemoryStore - Block broadcast_1940_piece0 stored as bytes in memory (estimated size 24.0 B, free 1914.2 MiB)
21:00:23.087 INFO BlockManagerInfo - Added broadcast_1940_piece0 in memory on localhost:43253 (size: 24.0 B, free: 1918.6 MiB)
21:00:23.087 INFO SparkContext - Created broadcast 1940 from broadcast at MarkDuplicatesSparkUtils.java:126
21:00:23.087 INFO MemoryStore - Block broadcast_1941 stored as values in memory (estimated size 136.0 B, free 1914.2 MiB)
21:00:23.088 INFO MemoryStore - Block broadcast_1941_piece0 stored as bytes in memory (estimated size 21.0 B, free 1914.2 MiB)
21:00:23.088 INFO BlockManagerInfo - Added broadcast_1941_piece0 in memory on localhost:43253 (size: 21.0 B, free: 1918.6 MiB)
21:00:23.088 INFO SparkContext - Created broadcast 1941 from broadcast at MarkDuplicatesSparkUtils.java:127
21:00:23.099 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
21:00:23.100 INFO DAGScheduler - Registering RDD 5389 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 302
21:00:23.100 INFO DAGScheduler - Registering RDD 5393 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 301
21:00:23.100 INFO DAGScheduler - Registering RDD 5397 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 300
21:00:23.100 INFO DAGScheduler - Got job 547 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
21:00:23.100 INFO DAGScheduler - Final stage: ResultStage 1213 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
21:00:23.100 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1212)
21:00:23.100 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1212)
21:00:23.100 INFO DAGScheduler - Submitting ShuffleMapStage 1210 (MapPartitionsRDD[5389] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
21:00:23.101 INFO MemoryStore - Block broadcast_1942 stored as values in memory (estimated size 21.1 KiB, free 1914.2 MiB)
21:00:23.101 INFO MemoryStore - Block broadcast_1942_piece0 stored as bytes in memory (estimated size 10.9 KiB, free 1914.2 MiB)
21:00:23.101 INFO BlockManagerInfo - Added broadcast_1942_piece0 in memory on localhost:43253 (size: 10.9 KiB, free: 1918.6 MiB)
21:00:23.101 INFO SparkContext - Created broadcast 1942 from broadcast at DAGScheduler.scala:1580
21:00:23.102 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1210 (MapPartitionsRDD[5389] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
21:00:23.102 INFO TaskSchedulerImpl - Adding task set 1210.0 with 1 tasks resource profile 0
21:00:23.102 INFO TaskSetManager - Starting task 0.0 in stage 1210.0 (TID 1037) (localhost, executor driver, partition 0, NODE_LOCAL, 9813 bytes)
21:00:23.102 INFO Executor - Running task 0.0 in stage 1210.0 (TID 1037)
21:00:23.104 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.104 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.107 INFO Executor - Finished task 0.0 in stage 1210.0 (TID 1037). 1922 bytes result sent to driver
21:00:23.107 INFO TaskSetManager - Finished task 0.0 in stage 1210.0 (TID 1037) in 5 ms on localhost (executor driver) (1/1)
21:00:23.107 INFO TaskSchedulerImpl - Removed TaskSet 1210.0, whose tasks have all completed, from pool
21:00:23.107 INFO DAGScheduler - ShuffleMapStage 1210 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.007 s
21:00:23.107 INFO DAGScheduler - looking for newly runnable stages
21:00:23.107 INFO DAGScheduler - running: HashSet()
21:00:23.107 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1211, ShuffleMapStage 1212, ResultStage 1213)
21:00:23.107 INFO DAGScheduler - failed: HashSet()
21:00:23.108 INFO DAGScheduler - Submitting ShuffleMapStage 1211 (MapPartitionsRDD[5393] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
21:00:23.108 INFO MemoryStore - Block broadcast_1943 stored as values in memory (estimated size 25.2 KiB, free 1914.2 MiB)
21:00:23.109 INFO MemoryStore - Block broadcast_1943_piece0 stored as bytes in memory (estimated size 13.1 KiB, free 1914.2 MiB)
21:00:23.109 INFO BlockManagerInfo - Added broadcast_1943_piece0 in memory on localhost:43253 (size: 13.1 KiB, free: 1918.6 MiB)
21:00:23.109 INFO SparkContext - Created broadcast 1943 from broadcast at DAGScheduler.scala:1580
21:00:23.109 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1211 (MapPartitionsRDD[5393] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
21:00:23.109 INFO TaskSchedulerImpl - Adding task set 1211.0 with 1 tasks resource profile 0
21:00:23.109 INFO TaskSetManager - Starting task 0.0 in stage 1211.0 (TID 1038) (localhost, executor driver, partition 0, NODE_LOCAL, 9288 bytes)
21:00:23.109 INFO Executor - Running task 0.0 in stage 1211.0 (TID 1038)
21:00:23.111 INFO ShuffleBlockFetcherIterator - Getting 1 (106.0 B) non-empty blocks including 1 (106.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.111 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.115 INFO Executor - Finished task 0.0 in stage 1211.0 (TID 1038). 1922 bytes result sent to driver
21:00:23.115 INFO TaskSetManager - Finished task 0.0 in stage 1211.0 (TID 1038) in 6 ms on localhost (executor driver) (1/1)
21:00:23.115 INFO TaskSchedulerImpl - Removed TaskSet 1211.0, whose tasks have all completed, from pool
21:00:23.115 INFO DAGScheduler - ShuffleMapStage 1211 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.007 s
21:00:23.115 INFO DAGScheduler - looking for newly runnable stages
21:00:23.115 INFO DAGScheduler - running: HashSet()
21:00:23.115 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1212, ResultStage 1213)
21:00:23.115 INFO DAGScheduler - failed: HashSet()
21:00:23.115 INFO DAGScheduler - Submitting ShuffleMapStage 1212 (MapPartitionsRDD[5397] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
21:00:23.116 INFO MemoryStore - Block broadcast_1944 stored as values in memory (estimated size 15.9 KiB, free 1914.1 MiB)
21:00:23.116 INFO MemoryStore - Block broadcast_1944_piece0 stored as bytes in memory (estimated size 8.3 KiB, free 1914.1 MiB)
21:00:23.117 INFO BlockManagerInfo - Added broadcast_1944_piece0 in memory on localhost:43253 (size: 8.3 KiB, free: 1918.6 MiB)
21:00:23.117 INFO SparkContext - Created broadcast 1944 from broadcast at DAGScheduler.scala:1580
21:00:23.117 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1212 (MapPartitionsRDD[5397] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
21:00:23.117 INFO TaskSchedulerImpl - Adding task set 1212.0 with 1 tasks resource profile 0
21:00:23.117 INFO TaskSetManager - Starting task 0.0 in stage 1212.0 (TID 1039) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
21:00:23.117 INFO Executor - Running task 0.0 in stage 1212.0 (TID 1039)
21:00:23.119 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.119 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.121 INFO ShuffleBlockFetcherIterator - Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.121 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.122 INFO Executor - Finished task 0.0 in stage 1212.0 (TID 1039). 1922 bytes result sent to driver
21:00:23.122 INFO TaskSetManager - Finished task 0.0 in stage 1212.0 (TID 1039) in 5 ms on localhost (executor driver) (1/1)
21:00:23.122 INFO TaskSchedulerImpl - Removed TaskSet 1212.0, whose tasks have all completed, from pool
21:00:23.122 INFO DAGScheduler - ShuffleMapStage 1212 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.006 s
21:00:23.122 INFO DAGScheduler - looking for newly runnable stages
21:00:23.122 INFO DAGScheduler - running: HashSet()
21:00:23.122 INFO DAGScheduler - waiting: HashSet(ResultStage 1213)
21:00:23.122 INFO DAGScheduler - failed: HashSet()
21:00:23.122 INFO DAGScheduler - Submitting ResultStage 1213 (MapPartitionsRDD[5399] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
21:00:23.123 INFO MemoryStore - Block broadcast_1945 stored as values in memory (estimated size 17.1 KiB, free 1914.1 MiB)
21:00:23.123 INFO MemoryStore - Block broadcast_1945_piece0 stored as bytes in memory (estimated size 9.0 KiB, free 1914.1 MiB)
21:00:23.124 INFO BlockManagerInfo - Added broadcast_1945_piece0 in memory on localhost:43253 (size: 9.0 KiB, free: 1918.6 MiB)
21:00:23.124 INFO SparkContext - Created broadcast 1945 from broadcast at DAGScheduler.scala:1580
21:00:23.124 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1213 (MapPartitionsRDD[5399] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
21:00:23.124 INFO TaskSchedulerImpl - Adding task set 1213.0 with 1 tasks resource profile 0
21:00:23.124 INFO TaskSetManager - Starting task 0.0 in stage 1213.0 (TID 1040) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:23.124 INFO Executor - Running task 0.0 in stage 1213.0 (TID 1040)
21:00:23.126 INFO ShuffleBlockFetcherIterator - Getting 1 (129.0 B) non-empty blocks including 1 (129.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.126 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.127 INFO Executor - Finished task 0.0 in stage 1213.0 (TID 1040). 1930 bytes result sent to driver
21:00:23.127 INFO TaskSetManager - Finished task 0.0 in stage 1213.0 (TID 1040) in 3 ms on localhost (executor driver) (1/1)
21:00:23.127 INFO TaskSchedulerImpl - Removed TaskSet 1213.0, whose tasks have all completed, from pool
21:00:23.127 INFO DAGScheduler - ResultStage 1213 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.004 s
21:00:23.128 INFO DAGScheduler - Job 547 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:23.128 INFO TaskSchedulerImpl - Killing all running tasks in stage 1213: Stage finished
21:00:23.128 INFO DAGScheduler - Job 547 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.028393 s
21:00:23.137 INFO MemoryStore - Block broadcast_1946 stored as values in memory (estimated size 5.6 KiB, free 1914.1 MiB)
21:00:23.137 INFO MemoryStore - Block broadcast_1946_piece0 stored as bytes in memory (estimated size 500.0 B, free 1914.1 MiB)
21:00:23.137 INFO BlockManagerInfo - Added broadcast_1946_piece0 in memory on localhost:43253 (size: 500.0 B, free: 1918.6 MiB)
21:00:23.138 INFO SparkContext - Created broadcast 1946 from broadcast at ReadsSparkSink.java:133
21:00:23.141 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
21:00:23.141 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
21:00:23.141 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21:00:23.151 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
21:00:23.152 INFO DAGScheduler - Registering RDD 5401 (mapToPair at SparkUtils.java:161) as input to shuffle 303
21:00:23.152 INFO DAGScheduler - Got job 548 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
21:00:23.152 INFO DAGScheduler - Final stage: ResultStage 1218 (runJob at SparkHadoopWriter.scala:83)
21:00:23.152 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1217)
21:00:23.152 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1217)
21:00:23.152 INFO DAGScheduler - Submitting ShuffleMapStage 1217 (MapPartitionsRDD[5401] at mapToPair at SparkUtils.java:161), which has no missing parents
21:00:23.153 INFO MemoryStore - Block broadcast_1947 stored as values in memory (estimated size 15.0 KiB, free 1914.1 MiB)
21:00:23.153 INFO MemoryStore - Block broadcast_1947_piece0 stored as bytes in memory (estimated size 7.8 KiB, free 1914.1 MiB)
21:00:23.153 INFO BlockManagerInfo - Added broadcast_1947_piece0 in memory on localhost:43253 (size: 7.8 KiB, free: 1918.6 MiB)
21:00:23.153 INFO SparkContext - Created broadcast 1947 from broadcast at DAGScheduler.scala:1580
21:00:23.154 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1217 (MapPartitionsRDD[5401] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
21:00:23.154 INFO TaskSchedulerImpl - Adding task set 1217.0 with 1 tasks resource profile 0
21:00:23.154 INFO TaskSetManager - Starting task 0.0 in stage 1217.0 (TID 1041) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
21:00:23.154 INFO Executor - Running task 0.0 in stage 1217.0 (TID 1041)
21:00:23.156 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.156 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.157 INFO ShuffleBlockFetcherIterator - Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.157 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.158 INFO Executor - Finished task 0.0 in stage 1217.0 (TID 1041). 1879 bytes result sent to driver
21:00:23.158 INFO TaskSetManager - Finished task 0.0 in stage 1217.0 (TID 1041) in 4 ms on localhost (executor driver) (1/1)
21:00:23.159 INFO TaskSchedulerImpl - Removed TaskSet 1217.0, whose tasks have all completed, from pool
21:00:23.159 INFO DAGScheduler - ShuffleMapStage 1217 (mapToPair at SparkUtils.java:161) finished in 0.007 s
21:00:23.159 INFO DAGScheduler - looking for newly runnable stages
21:00:23.159 INFO DAGScheduler - running: HashSet()
21:00:23.159 INFO DAGScheduler - waiting: HashSet(ResultStage 1218)
21:00:23.159 INFO DAGScheduler - failed: HashSet()
21:00:23.159 INFO DAGScheduler - Submitting ResultStage 1218 (MapPartitionsRDD[5407] at saveAsTextFile at SamSink.java:65), which has no missing parents
21:00:23.165 INFO MemoryStore - Block broadcast_1948 stored as values in memory (estimated size 159.8 KiB, free 1913.9 MiB)
21:00:23.166 INFO MemoryStore - Block broadcast_1948_piece0 stored as bytes in memory (estimated size 71.6 KiB, free 1913.9 MiB)
21:00:23.166 INFO BlockManagerInfo - Added broadcast_1948_piece0 in memory on localhost:43253 (size: 71.6 KiB, free: 1918.5 MiB)
21:00:23.166 INFO SparkContext - Created broadcast 1948 from broadcast at DAGScheduler.scala:1580
21:00:23.166 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1218 (MapPartitionsRDD[5407] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0))
21:00:23.166 INFO TaskSchedulerImpl - Adding task set 1218.0 with 1 tasks resource profile 0
21:00:23.167 INFO TaskSetManager - Starting task 0.0 in stage 1218.0 (TID 1042) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:23.167 INFO Executor - Running task 0.0 in stage 1218.0 (TID 1042)
21:00:23.170 INFO ShuffleBlockFetcherIterator - Getting 1 (490.0 B) non-empty blocks including 1 (490.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.170 INFO ShuffleBlockFetcherIterator - Started 0 
remote fetches in 0 ms 21:00:23.171 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter 21:00:23.171 INFO FileOutputCommitter - File Output Committer Algorithm version is 1 21:00:23.171 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 21:00:23.177 INFO FileOutputCommitter - Saved output of task 'attempt_202506122100238601972803909013312_5407_m_000000_0' to file:/tmp/MarkDups7533924236431831406/MarkDups.sam.parts/_temporary/0/task_202506122100238601972803909013312_5407_m_000000 21:00:23.177 INFO SparkHadoopMapRedUtil - attempt_202506122100238601972803909013312_5407_m_000000_0: Committed. Elapsed time: 0 ms. 21:00:23.177 INFO Executor - Finished task 0.0 in stage 1218.0 (TID 1042). 1858 bytes result sent to driver 21:00:23.177 INFO TaskSetManager - Finished task 0.0 in stage 1218.0 (TID 1042) in 11 ms on localhost (executor driver) (1/1) 21:00:23.177 INFO TaskSchedulerImpl - Removed TaskSet 1218.0, whose tasks have all completed, from pool 21:00:23.177 INFO DAGScheduler - ResultStage 1218 (runJob at SparkHadoopWriter.scala:83) finished in 0.018 s 21:00:23.177 INFO DAGScheduler - Job 548 is finished. Cancelling potential speculative or zombie tasks for this job 21:00:23.177 INFO TaskSchedulerImpl - Killing all running tasks in stage 1218: Stage finished 21:00:23.177 INFO DAGScheduler - Job 548 finished: runJob at SparkHadoopWriter.scala:83, took 0.025821 s 21:00:23.177 INFO SparkHadoopWriter - Start to commit write Job job_202506122100238601972803909013312_5407. 21:00:23.180 INFO SparkHadoopWriter - Write Job job_202506122100238601972803909013312_5407 committed. Elapsed time: 2 ms. 
21:00:23.184 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups7533924236431831406/MarkDups.sam
21:00:23.185 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups7533924236431831406/MarkDups.sam done
21:00:23.186 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 9:00:23 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes.
Runtime.totalMemory()=1535115264
21:00:23.201 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:23.201 INFO MarkDuplicatesSpark - The Genome Analysis Toolkit (GATK) v092bdf2-SNAPSHOT
21:00:23.201 INFO MarkDuplicatesSpark - For support and documentation go to https://software.broadinstitute.org/gatk/
21:00:23.201 INFO MarkDuplicatesSpark - Executing as root@e24cea241624 on Linux v6.11.0-1015-azure amd64
21:00:23.201 INFO MarkDuplicatesSpark - Java runtime: OpenJDK 64-Bit Server VM v17.0.12+7-Ubuntu-1ubuntu222.04
21:00:23.201 INFO MarkDuplicatesSpark - Start Date/Time: June 12, 2025 at 9:00:23 PM GMT
21:00:23.201 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:23.201 INFO MarkDuplicatesSpark - ------------------------------------------------------------
21:00:23.201 INFO MarkDuplicatesSpark - HTSJDK Version: 4.2.0
21:00:23.201 INFO MarkDuplicatesSpark - Picard Version: 3.4.0
21:00:23.201 INFO MarkDuplicatesSpark - Built for Spark Version: 3.5.0
21:00:23.201 INFO MarkDuplicatesSpark - HTSJDK Defaults.COMPRESSION_LEVEL : 2
21:00:23.201 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
21:00:23.201 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
21:00:23.201 INFO MarkDuplicatesSpark - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
21:00:23.201 INFO MarkDuplicatesSpark - Deflater: IntelDeflater
21:00:23.201 INFO MarkDuplicatesSpark - Inflater: IntelInflater
21:00:23.201 INFO MarkDuplicatesSpark - GCS max retries/reopens: 20
21:00:23.201 INFO MarkDuplicatesSpark - Requester pays: disabled
21:00:23.201 INFO MarkDuplicatesSpark - Initializing engine
21:00:23.201 INFO MarkDuplicatesSpark - Done initializing engine
21:00:23.201 INFO MarkDuplicatesSpark - Spark verbosity set to INFO (see --spark-verbosity argument)
21:00:23.203 INFO MemoryStore - Block broadcast_1949 stored as values in memory (estimated size 305.5 KiB, free 1913.6 MiB)
21:00:23.210 INFO MemoryStore - Block broadcast_1949_piece0 stored as bytes in memory (estimated size 64.1 KiB, free 1913.5 MiB)
21:00:23.210 INFO BlockManagerInfo - Added broadcast_1949_piece0 in memory on localhost:43253 (size: 64.1 KiB, free: 1918.5 MiB)
21:00:23.210 INFO SparkContext - Created broadcast 1949 from newAPIHadoopFile at PathSplitSource.java:96
21:00:23.231 INFO MemoryStore - Block broadcast_1950 stored as values in memory (estimated size 305.5 KiB, free 1913.2 MiB)
21:00:23.237 INFO MemoryStore - Block broadcast_1950_piece0 stored as bytes in memory (estimated size 64.1 KiB, free 1913.1 MiB)
21:00:23.237 INFO BlockManagerInfo - Added broadcast_1950_piece0 in memory on localhost:43253 (size: 64.1 KiB, free: 1918.4 MiB)
21:00:23.238 INFO SparkContext - Created broadcast 1950 from newAPIHadoopFile at PathSplitSource.java:96
21:00:23.269 INFO SparkContext - Starting job: collect at SparkUtils.java:205
21:00:23.269 INFO FileInputFormat - Total input files to process : 1
21:00:23.270 INFO DAGScheduler - Registering RDD 5421 (mapToPair at SparkUtils.java:161) as input to shuffle 304
21:00:23.270 INFO DAGScheduler - Got job 549 (collect at SparkUtils.java:205) with 1 output partitions
21:00:23.270 INFO DAGScheduler - Final stage: ResultStage 1220 (collect at SparkUtils.java:205)
21:00:23.270 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1219)
21:00:23.270 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1219)
21:00:23.270 INFO DAGScheduler - Submitting ShuffleMapStage 1219 (MapPartitionsRDD[5421] at mapToPair at SparkUtils.java:161), which has no missing parents
21:00:23.287 INFO MemoryStore - Block broadcast_1951 stored as values in memory (estimated size 480.2 KiB, free 1912.7 MiB)
21:00:23.289 INFO MemoryStore - Block broadcast_1951_piece0 stored as bytes in memory (estimated size 206.2 KiB, free 1912.5 MiB)
21:00:23.289 INFO BlockManagerInfo - Added broadcast_1951_piece0 in memory on localhost:43253 (size: 206.2 KiB, free: 1918.2 MiB)
21:00:23.289 INFO SparkContext - Created broadcast 1951 from broadcast at DAGScheduler.scala:1580
21:00:23.289 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1219 (MapPartitionsRDD[5421] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
21:00:23.289 INFO TaskSchedulerImpl - Adding task set 1219.0 with 1 tasks resource profile 0
21:00:23.290 INFO TaskSetManager - Starting task 0.0 in stage 1219.0 (TID 1043) (localhost, executor driver, partition 0, PROCESS_LOCAL, 9673 bytes)
21:00:23.290 INFO Executor - Running task 0.0 in stage 1219.0 (TID 1043)
21:00:23.317 INFO NewHadoopRDD - Input split: file:/gatkCloneMountPoint/src/test/resources/org/broadinstitute/hellbender/tools/walkers/MarkDuplicatesGATK/inputSingleLibrarySolexa16404.bam:0+5416
21:00:23.322 INFO Executor - Finished task 0.0 in stage 1219.0 (TID 1043). 1148 bytes result sent to driver
21:00:23.322 INFO TaskSetManager - Finished task 0.0 in stage 1219.0 (TID 1043) in 32 ms on localhost (executor driver) (1/1)
21:00:23.322 INFO TaskSchedulerImpl - Removed TaskSet 1219.0, whose tasks have all completed, from pool
21:00:23.322 INFO DAGScheduler - ShuffleMapStage 1219 (mapToPair at SparkUtils.java:161) finished in 0.052 s
21:00:23.322 INFO DAGScheduler - looking for newly runnable stages
21:00:23.322 INFO DAGScheduler - running: HashSet()
21:00:23.322 INFO DAGScheduler - waiting: HashSet(ResultStage 1220)
21:00:23.322 INFO DAGScheduler - failed: HashSet()
21:00:23.322 INFO DAGScheduler - Submitting ResultStage 1220 (MapPartitionsRDD[5424] at mapPartitions at SparkUtils.java:188), which has no missing parents
21:00:23.323 INFO MemoryStore - Block broadcast_1952 stored as values in memory (estimated size 7.4 KiB, free 1912.5 MiB)
21:00:23.323 INFO MemoryStore - Block broadcast_1952_piece0 stored as bytes in memory (estimated size 4.1 KiB, free 1912.5 MiB)
21:00:23.323 INFO BlockManagerInfo - Added broadcast_1952_piece0 in memory on localhost:43253 (size: 4.1 KiB, free: 1918.2 MiB)
21:00:23.324 INFO SparkContext - Created broadcast 1952 from broadcast at DAGScheduler.scala:1580
21:00:23.324 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1220 (MapPartitionsRDD[5424] at mapPartitions at SparkUtils.java:188) (first 15 tasks are for partitions Vector(0))
21:00:23.324 INFO TaskSchedulerImpl - Adding task set 1220.0 with 1 tasks resource profile 0
21:00:23.324 INFO TaskSetManager - Starting task 0.0 in stage 1220.0 (TID 1044) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:23.324 INFO Executor - Running task 0.0 in stage 1220.0 (TID 1044)
21:00:23.325 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.325 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.326 INFO Executor - Finished task 0.0 in stage 1220.0 (TID 1044). 2039 bytes result sent to driver
21:00:23.327 INFO TaskSetManager - Finished task 0.0 in stage 1220.0 (TID 1044) in 2 ms on localhost (executor driver) (1/1)
21:00:23.327 INFO TaskSchedulerImpl - Removed TaskSet 1220.0, whose tasks have all completed, from pool
21:00:23.327 INFO DAGScheduler - ResultStage 1220 (collect at SparkUtils.java:205) finished in 0.004 s
21:00:23.327 INFO DAGScheduler - Job 549 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:23.327 INFO TaskSchedulerImpl - Killing all running tasks in stage 1220: Stage finished
21:00:23.327 INFO DAGScheduler - Job 549 finished: collect at SparkUtils.java:205, took 0.057655 s
21:00:23.331 INFO MemoryStore - Block broadcast_1953 stored as values in memory (estimated size 392.0 B, free 1912.5 MiB)
21:00:23.331 INFO MemoryStore - Block broadcast_1953_piece0 stored as bytes in memory (estimated size 33.0 B, free 1912.5 MiB)
21:00:23.331 INFO BlockManagerInfo - Added broadcast_1953_piece0 in memory on localhost:43253 (size: 33.0 B, free: 1918.2 MiB)
21:00:23.331 INFO SparkContext - Created broadcast 1953 from broadcast at MarkDuplicatesSparkUtils.java:126
21:00:23.331 INFO MemoryStore - Block broadcast_1954 stored as values in memory (estimated size 144.0 B, free 1912.5 MiB)
21:00:23.332 INFO MemoryStore - Block broadcast_1954_piece0 stored as bytes in memory (estimated size 28.0 B, free 1912.5 MiB)
21:00:23.332 INFO BlockManagerInfo - Added broadcast_1954_piece0 in memory on localhost:43253 (size: 28.0 B, free: 1918.2 MiB)
21:00:23.332 INFO SparkContext - Created broadcast 1954 from broadcast at MarkDuplicatesSparkUtils.java:127
21:00:23.344 INFO SparkContext - Starting job: collectAsMap at MarkDuplicatesSparkUtils.java:542
21:00:23.344 INFO DAGScheduler - Registering RDD 5433 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) as input to shuffle 307
21:00:23.345 INFO DAGScheduler - Registering RDD 5437 (mapToPair at MarkDuplicatesSpark.java:217) as input to shuffle 306
21:00:23.345 INFO DAGScheduler - Registering RDD 5441 (mapToPair at MarkDuplicatesSparkUtils.java:497) as input to shuffle 305
21:00:23.345 INFO DAGScheduler - Got job 550 (collectAsMap at MarkDuplicatesSparkUtils.java:542) with 1 output partitions
21:00:23.345 INFO DAGScheduler - Final stage: ResultStage 1225 (collectAsMap at MarkDuplicatesSparkUtils.java:542)
21:00:23.345 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1224)
21:00:23.345 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1224)
21:00:23.345 INFO DAGScheduler - Submitting ShuffleMapStage 1222 (MapPartitionsRDD[5433] at flatMapToPair at MarkDuplicatesSparkUtils.java:130), which has no missing parents
21:00:23.346 INFO MemoryStore - Block broadcast_1955 stored as values in memory (estimated size 45.2 KiB, free 1912.4 MiB)
21:00:23.346 INFO MemoryStore - Block broadcast_1955_piece0 stored as bytes in memory (estimated size 17.1 KiB, free 1912.4 MiB)
21:00:23.346 INFO BlockManagerInfo - Added broadcast_1955_piece0 in memory on localhost:43253 (size: 17.1 KiB, free: 1918.2 MiB)
21:00:23.346 INFO SparkContext - Created broadcast 1955 from broadcast at DAGScheduler.scala:1580
21:00:23.347 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1222 (MapPartitionsRDD[5433] at flatMapToPair at MarkDuplicatesSparkUtils.java:130) (first 15 tasks are for partitions Vector(0))
21:00:23.347 INFO TaskSchedulerImpl - Adding task set 1222.0 with 1 tasks resource profile 0
21:00:23.347 INFO TaskSetManager - Starting task 0.0 in stage 1222.0 (TID 1045) (localhost, executor driver, partition 0, NODE_LOCAL, 9813 bytes)
21:00:23.347 INFO Executor - Running task 0.0 in stage 1222.0 (TID 1045)
21:00:23.349 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.349 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.353 INFO Executor - Finished task 0.0 in stage 1222.0 (TID 1045). 1922 bytes result sent to driver
21:00:23.353 INFO TaskSetManager - Finished task 0.0 in stage 1222.0 (TID 1045) in 6 ms on localhost (executor driver) (1/1)
21:00:23.353 INFO TaskSchedulerImpl - Removed TaskSet 1222.0, whose tasks have all completed, from pool
21:00:23.353 INFO DAGScheduler - ShuffleMapStage 1222 (flatMapToPair at MarkDuplicatesSparkUtils.java:130) finished in 0.008 s
21:00:23.353 INFO DAGScheduler - looking for newly runnable stages
21:00:23.353 INFO DAGScheduler - running: HashSet()
21:00:23.353 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1223, ShuffleMapStage 1224, ResultStage 1225)
21:00:23.353 INFO DAGScheduler - failed: HashSet()
21:00:23.353 INFO DAGScheduler - Submitting ShuffleMapStage 1223 (MapPartitionsRDD[5437] at mapToPair at MarkDuplicatesSpark.java:217), which has no missing parents
21:00:23.354 INFO MemoryStore - Block broadcast_1956 stored as values in memory (estimated size 49.3 KiB, free 1912.3 MiB)
21:00:23.355 INFO MemoryStore - Block broadcast_1956_piece0 stored as bytes in memory (estimated size 19.3 KiB, free 1912.3 MiB)
21:00:23.355 INFO BlockManagerInfo - Added broadcast_1956_piece0 in memory on localhost:43253 (size: 19.3 KiB, free: 1918.2 MiB)
21:00:23.355 INFO SparkContext - Created broadcast 1956 from broadcast at DAGScheduler.scala:1580
21:00:23.355 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1223 (MapPartitionsRDD[5437] at mapToPair at MarkDuplicatesSpark.java:217) (first 15 tasks are for partitions Vector(0))
21:00:23.355 INFO TaskSchedulerImpl - Adding task set 1223.0 with 1 tasks resource profile 0
21:00:23.355 INFO TaskSetManager - Starting task 0.0 in stage 1223.0 (TID 1046) (localhost, executor driver, partition 0, NODE_LOCAL, 9288 bytes)
21:00:23.356 INFO Executor - Running task 0.0 in stage 1223.0 (TID 1046)
21:00:23.358 INFO ShuffleBlockFetcherIterator - Getting 1 (593.0 B) non-empty blocks including 1 (593.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.358 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.362 INFO Executor - Finished task 0.0 in stage 1223.0 (TID 1046). 1922 bytes result sent to driver
21:00:23.362 INFO TaskSetManager - Finished task 0.0 in stage 1223.0 (TID 1046) in 7 ms on localhost (executor driver) (1/1)
21:00:23.362 INFO TaskSchedulerImpl - Removed TaskSet 1223.0, whose tasks have all completed, from pool
21:00:23.362 INFO DAGScheduler - ShuffleMapStage 1223 (mapToPair at MarkDuplicatesSpark.java:217) finished in 0.009 s
21:00:23.362 INFO DAGScheduler - looking for newly runnable stages
21:00:23.362 INFO DAGScheduler - running: HashSet()
21:00:23.362 INFO DAGScheduler - waiting: HashSet(ShuffleMapStage 1224, ResultStage 1225)
21:00:23.362 INFO DAGScheduler - failed: HashSet()
21:00:23.362 INFO DAGScheduler - Submitting ShuffleMapStage 1224 (MapPartitionsRDD[5441] at mapToPair at MarkDuplicatesSparkUtils.java:497), which has no missing parents
21:00:23.363 INFO MemoryStore - Block broadcast_1957 stored as values in memory (estimated size 40.0 KiB, free 1912.3 MiB)
21:00:23.364 INFO MemoryStore - Block broadcast_1957_piece0 stored as bytes in memory (estimated size 14.5 KiB, free 1912.3 MiB)
21:00:23.364 INFO BlockManagerInfo - Added broadcast_1957_piece0 in memory on localhost:43253 (size: 14.5 KiB, free: 1918.1 MiB)
21:00:23.364 INFO SparkContext - Created broadcast 1957 from broadcast at DAGScheduler.scala:1580
21:00:23.364 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1224 (MapPartitionsRDD[5441] at mapToPair at MarkDuplicatesSparkUtils.java:497) (first 15 tasks are for partitions Vector(0))
21:00:23.364 INFO TaskSchedulerImpl - Adding task set 1224.0 with 1 tasks resource profile 0
21:00:23.364 INFO TaskSetManager - Starting task 0.0 in stage 1224.0 (TID 1047) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
21:00:23.365 INFO Executor - Running task 0.0 in stage 1224.0 (TID 1047)
21:00:23.366 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.366 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.368 INFO ShuffleBlockFetcherIterator - Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.368 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.369 INFO Executor - Finished task 0.0 in stage 1224.0 (TID 1047). 1922 bytes result sent to driver
21:00:23.370 INFO TaskSetManager - Finished task 0.0 in stage 1224.0 (TID 1047) in 6 ms on localhost (executor driver) (1/1)
21:00:23.370 INFO TaskSchedulerImpl - Removed TaskSet 1224.0, whose tasks have all completed, from pool
21:00:23.370 INFO DAGScheduler - ShuffleMapStage 1224 (mapToPair at MarkDuplicatesSparkUtils.java:497) finished in 0.007 s
21:00:23.370 INFO DAGScheduler - looking for newly runnable stages
21:00:23.370 INFO DAGScheduler - running: HashSet()
21:00:23.370 INFO DAGScheduler - waiting: HashSet(ResultStage 1225)
21:00:23.370 INFO DAGScheduler - failed: HashSet()
21:00:23.370 INFO DAGScheduler - Submitting ResultStage 1225 (MapPartitionsRDD[5443] at mapValues at MarkDuplicatesSparkUtils.java:519), which has no missing parents
21:00:23.371 INFO MemoryStore - Block broadcast_1958 stored as values in memory (estimated size 41.2 KiB, free 1912.2 MiB)
21:00:23.371 INFO MemoryStore - Block broadcast_1958_piece0 stored as bytes in memory (estimated size 15.3 KiB, free 1912.2 MiB)
21:00:23.372 INFO BlockManagerInfo - Added broadcast_1958_piece0 in memory on localhost:43253 (size: 15.3 KiB, free: 1918.1 MiB)
21:00:23.372 INFO SparkContext - Created broadcast 1958 from broadcast at DAGScheduler.scala:1580
21:00:23.372 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1225 (MapPartitionsRDD[5443] at mapValues at MarkDuplicatesSparkUtils.java:519) (first 15 tasks are for partitions Vector(0))
21:00:23.372 INFO TaskSchedulerImpl - Adding task set 1225.0 with 1 tasks resource profile 0
21:00:23.372 INFO TaskSetManager - Starting task 0.0 in stage 1225.0 (TID 1048) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:23.372 INFO Executor - Running task 0.0 in stage 1225.0 (TID 1048)
21:00:23.374 INFO ShuffleBlockFetcherIterator - Getting 1 (142.0 B) non-empty blocks including 1 (142.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.374 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.376 INFO Executor - Finished task 0.0 in stage 1225.0 (TID 1048). 1944 bytes result sent to driver
21:00:23.376 INFO TaskSetManager - Finished task 0.0 in stage 1225.0 (TID 1048) in 4 ms on localhost (executor driver) (1/1)
21:00:23.376 INFO TaskSchedulerImpl - Removed TaskSet 1225.0, whose tasks have all completed, from pool
21:00:23.376 INFO DAGScheduler - ResultStage 1225 (collectAsMap at MarkDuplicatesSparkUtils.java:542) finished in 0.006 s
21:00:23.376 INFO DAGScheduler - Job 550 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:23.376 INFO TaskSchedulerImpl - Killing all running tasks in stage 1225: Stage finished
21:00:23.376 INFO DAGScheduler - Job 550 finished: collectAsMap at MarkDuplicatesSparkUtils.java:542, took 0.031962 s
21:00:23.386 INFO MemoryStore - Block broadcast_1959 stored as values in memory (estimated size 93.5 KiB, free 1912.1 MiB)
21:00:23.387 INFO MemoryStore - Block broadcast_1959_piece0 stored as bytes in memory (estimated size 5.6 KiB, free 1912.1 MiB)
21:00:23.387 INFO BlockManagerInfo - Added broadcast_1959_piece0 in memory on localhost:43253 (size: 5.6 KiB, free: 1918.1 MiB)
21:00:23.387 INFO SparkContext - Created broadcast 1959 from broadcast at ReadsSparkSink.java:133
21:00:23.390 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
21:00:23.390 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
21:00:23.390 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21:00:23.401 INFO SparkContext - Starting job: runJob at SparkHadoopWriter.scala:83
21:00:23.401 INFO DAGScheduler - Registering RDD 5445 (mapToPair at SparkUtils.java:161) as input to shuffle 308
21:00:23.402 INFO DAGScheduler - Got job 551 (runJob at SparkHadoopWriter.scala:83) with 1 output partitions
21:00:23.402 INFO DAGScheduler - Final stage: ResultStage 1230 (runJob at SparkHadoopWriter.scala:83)
21:00:23.402 INFO DAGScheduler - Parents of final stage: List(ShuffleMapStage 1229)
21:00:23.402 INFO DAGScheduler - Missing parents: List(ShuffleMapStage 1229)
21:00:23.402 INFO DAGScheduler - Submitting ShuffleMapStage 1229 (MapPartitionsRDD[5445] at mapToPair at SparkUtils.java:161), which has no missing parents
21:00:23.402 INFO MemoryStore - Block broadcast_1960 stored as values in memory (estimated size 39.1 KiB, free 1912.1 MiB)
21:00:23.407 INFO MemoryStore - Block broadcast_1960_piece0 stored as bytes in memory (estimated size 14.1 KiB, free 1912.1 MiB)
21:00:23.407 INFO BlockManagerInfo - Added broadcast_1960_piece0 in memory on localhost:43253 (size: 14.1 KiB, free: 1918.1 MiB)
21:00:23.408 INFO SparkContext - Created broadcast 1960 from broadcast at DAGScheduler.scala:1580
21:00:23.408 INFO BlockManagerInfo - Removed broadcast_1922_piece0 on localhost:43253 in memory (size: 71.6 KiB, free: 1918.2 MiB)
21:00:23.408 INFO DAGScheduler - Submitting 1 missing tasks from ShuffleMapStage 1229 (MapPartitionsRDD[5445] at mapToPair at SparkUtils.java:161) (first 15 tasks are for partitions Vector(0))
21:00:23.408 INFO TaskSchedulerImpl - Adding task set 1229.0 with 1 tasks resource profile 0
21:00:23.408 INFO BlockManagerInfo - Removed broadcast_1921_piece0 on localhost:43253 in memory (size: 8.0 KiB, free: 1918.2 MiB)
21:00:23.408 INFO BlockManagerInfo - Removed broadcast_1919_piece0 on localhost:43253 in memory (size: 9.1 KiB, free: 1918.2 MiB)
21:00:23.408 INFO TaskSetManager - Starting task 0.0 in stage 1229.0 (TID 1049) (localhost, executor driver, partition 0, NODE_LOCAL, 9871 bytes)
21:00:23.409 INFO Executor - Running task 0.0 in stage 1229.0 (TID 1049)
21:00:23.409 INFO BlockManagerInfo - Removed broadcast_1938_piece0 on localhost:43253 in memory (size: 200.0 KiB, free: 1918.4 MiB)
21:00:23.409 INFO BlockManagerInfo - Removed broadcast_1957_piece0 on localhost:43253 in memory (size: 14.5 KiB, free: 1918.4 MiB)
21:00:23.410 INFO BlockManagerInfo - Removed broadcast_1958_piece0 on localhost:43253 in memory (size: 15.3 KiB, free: 1918.4 MiB)
21:00:23.410 INFO BlockManagerInfo - Removed broadcast_1946_piece0 on localhost:43253 in memory (size: 500.0 B, free: 1918.4 MiB)
21:00:23.410 INFO BlockManagerInfo - Removed broadcast_1947_piece0 on localhost:43253 in memory (size: 7.8 KiB, free: 1918.4 MiB)
21:00:23.411 INFO BlockManagerInfo - Removed broadcast_1935_piece0 on localhost:43253 in memory (size: 80.5 KiB, free: 1918.5 MiB)
21:00:23.411 INFO ShuffleBlockFetcherIterator - Getting 1 (4.3 KiB) non-empty blocks including 1 (4.3 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.411 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.411 INFO BlockManagerInfo - Removed broadcast_1911_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1918.6 MiB)
21:00:23.411 INFO BlockManagerInfo - Removed broadcast_1913_piece0 on localhost:43253 in memory (size: 4.1 KiB, free: 1918.6 MiB)
21:00:23.412 INFO BlockManagerInfo - Removed broadcast_1929_piece0 on localhost:43253 in memory (size: 19.8 KiB, free: 1918.6 MiB)
21:00:23.412 INFO BlockManagerInfo - Removed broadcast_1956_piece0 on localhost:43253 in memory (size: 19.3 KiB, free: 1918.6 MiB)
21:00:23.412 INFO BlockManagerInfo - Removed broadcast_1916_piece0 on localhost:43253 in memory (size: 11.2 KiB, free: 1918.6 MiB)
21:00:23.413 INFO BlockManagerInfo - Removed broadcast_1955_piece0 on localhost:43253 in memory (size: 17.1 KiB, free: 1918.6 MiB)
21:00:23.413 INFO ShuffleBlockFetcherIterator - Getting 1 (652.0 B) non-empty blocks including 1 (652.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.413 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.413 INFO BlockManagerInfo - Removed broadcast_1914_piece0 on localhost:43253 in memory (size: 19.0 B, free: 1918.6 MiB)
21:00:23.414 INFO BlockManagerInfo - Removed broadcast_1949_piece0 on localhost:43253 in memory (size: 64.1 KiB, free: 1918.7 MiB)
21:00:23.414 INFO BlockManagerInfo - Removed broadcast_1943_piece0 on localhost:43253 in memory (size: 13.1 KiB, free: 1918.7 MiB)
21:00:23.414 INFO BlockManagerInfo - Removed broadcast_1915_piece0 on localhost:43253 in memory (size: 31.0 B, free: 1918.7 MiB)
21:00:23.414 INFO BlockManagerInfo - Removed broadcast_1931_piece0 on localhost:43253 in memory (size: 17.3 KiB, free: 1918.7 MiB)
21:00:23.415 INFO BlockManagerInfo - Removed broadcast_1939_piece0 on localhost:43253 in memory (size: 4.1 KiB, free: 1918.7 MiB)
21:00:23.415 INFO Executor - Finished task 0.0 in stage 1229.0 (TID 1049). 1922 bytes result sent to driver
21:00:23.415 INFO TaskSetManager - Finished task 0.0 in stage 1229.0 (TID 1049) in 7 ms on localhost (executor driver) (1/1)
21:00:23.415 INFO TaskSchedulerImpl - Removed TaskSet 1229.0, whose tasks have all completed, from pool
21:00:23.415 INFO BlockManagerInfo - Removed broadcast_1923_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1918.8 MiB)
21:00:23.415 INFO DAGScheduler - ShuffleMapStage 1229 (mapToPair at SparkUtils.java:161) finished in 0.013 s
21:00:23.415 INFO DAGScheduler - looking for newly runnable stages
21:00:23.415 INFO DAGScheduler - running: HashSet()
21:00:23.415 INFO DAGScheduler - waiting: HashSet(ResultStage 1230)
21:00:23.415 INFO DAGScheduler - failed: HashSet()
21:00:23.415 INFO DAGScheduler - Submitting ResultStage 1230 (MapPartitionsRDD[5451] at saveAsTextFile at SamSink.java:65), which has no missing parents
21:00:23.416 INFO BlockManagerInfo - Removed broadcast_1918_piece0 on localhost:43253 in memory (size: 8.5 KiB, free: 1918.8 MiB)
21:00:23.416 INFO BlockManagerInfo - Removed broadcast_1926_piece0 on localhost:43253 in memory (size: 4.1 KiB, free: 1918.8 MiB)
21:00:23.417 INFO BlockManagerInfo - Removed broadcast_1934_piece0 on localhost:43253 in memory (size: 16.8 KiB, free: 1918.8 MiB)
21:00:23.417 INFO BlockManagerInfo - Removed broadcast_1932_piece0 on localhost:43253 in memory (size: 18.0 KiB, free: 1918.8 MiB)
21:00:23.417 INFO BlockManagerInfo - Removed broadcast_1933_piece0 on localhost:43253 in memory (size: 7.9 KiB, free: 1918.9 MiB)
21:00:23.417 INFO BlockManagerInfo - Removed broadcast_1942_piece0 on localhost:43253 in memory (size: 10.9 KiB, free: 1918.9 MiB)
21:00:23.418 INFO BlockManagerInfo - Removed broadcast_1924_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1918.9 MiB)
21:00:23.418 INFO BlockManagerInfo - Removed broadcast_1912_piece0 on localhost:43253 in memory (size: 200.3 KiB, free: 1919.1 MiB)
21:00:23.418 INFO BlockManagerInfo - Removed broadcast_1944_piece0 on localhost:43253 in memory (size: 8.3 KiB, free: 1919.1 MiB)
21:00:23.419 INFO BlockManagerInfo - Removed broadcast_1937_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1919.2 MiB)
21:00:23.419 INFO BlockManagerInfo - Removed broadcast_1948_piece0 on localhost:43253 in memory (size: 71.6 KiB, free: 1919.3 MiB)
21:00:23.419 INFO BlockManagerInfo - Removed broadcast_1917_piece0 on localhost:43253 in memory (size: 13.2 KiB, free: 1919.3 MiB)
21:00:23.420 INFO BlockManagerInfo - Removed broadcast_1951_piece0 on localhost:43253 in memory (size: 206.2 KiB, free: 1919.5 MiB)
21:00:23.420 INFO BlockManagerInfo - Removed broadcast_1920_piece0 on localhost:43253 in memory (size: 663.0 B, free: 1919.5 MiB)
21:00:23.421 INFO BlockManagerInfo - Removed broadcast_1930_piece0 on localhost:43253 in memory (size: 22.0 KiB, free: 1919.5 MiB)
21:00:23.421 INFO BlockManagerInfo - Removed broadcast_1925_piece0 on localhost:43253 in memory (size: 208.9 KiB, free: 1919.7 MiB)
21:00:23.421 INFO BlockManagerInfo - Removed broadcast_1936_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1919.8 MiB)
21:00:23.422 INFO BlockManagerInfo - Removed broadcast_1940_piece0 on localhost:43253 in memory (size: 24.0 B, free: 1919.8 MiB)
21:00:23.422 INFO BlockManagerInfo - Removed broadcast_1910_piece0 on localhost:43253 in memory (size: 64.0 KiB, free: 1919.8 MiB)
21:00:23.422 INFO BlockManagerInfo - Removed broadcast_1927_piece0 on localhost:43253 in memory (size: 540.0 B, free: 1919.8 MiB)
21:00:23.422 INFO BlockManagerInfo - Removed broadcast_1928_piece0 on localhost:43253 in memory (size: 209.0 B, free: 1919.8 MiB)
21:00:23.423 INFO BlockManagerInfo - Removed broadcast_1941_piece0 on localhost:43253 in memory (size: 21.0 B, free: 1919.8 MiB)
21:00:23.423 INFO BlockManagerInfo - Removed broadcast_1945_piece0 on localhost:43253 in memory (size: 9.0 KiB, free: 1919.8 MiB)
21:00:23.423 INFO BlockManagerInfo - Removed broadcast_1952_piece0 on localhost:43253 in memory (size: 4.1 KiB, free: 1919.8 MiB)
21:00:23.425 INFO MemoryStore - Block broadcast_1961 stored as values in memory (estimated size 183.9 KiB, free 1918.9 MiB)
21:00:23.426 INFO MemoryStore - Block broadcast_1961_piece0 stored as bytes in memory (estimated size 77.8 KiB, free 1918.8 MiB)
21:00:23.426 INFO BlockManagerInfo - Added broadcast_1961_piece0 in memory on localhost:43253 (size: 77.8 KiB, free: 1919.8 MiB)
21:00:23.426 INFO SparkContext - Created broadcast 1961 from broadcast at DAGScheduler.scala:1580
21:00:23.426 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 1230 (MapPartitionsRDD[5451] at saveAsTextFile at SamSink.java:65) (first 15 tasks are for partitions Vector(0))
21:00:23.426 INFO TaskSchedulerImpl - Adding task set 1230.0 with 1 tasks resource profile 0
21:00:23.426 INFO TaskSetManager - Starting task 0.0 in stage 1230.0 (TID 1050) (localhost, executor driver, partition 0, NODE_LOCAL, 9299 bytes)
21:00:23.427 INFO Executor - Running task 0.0 in stage 1230.0 (TID 1050)
21:00:23.430 INFO ShuffleBlockFetcherIterator - Getting 1 (3.9 KiB) non-empty blocks including 1 (3.9 KiB) local and 0 (0.0 B) host-local and 0 (0.0 B) push-merged-local and 0 (0.0 B) remote blocks
21:00:23.430 INFO ShuffleBlockFetcherIterator - Started 0 remote fetches in 0 ms
21:00:23.431 INFO HadoopMapRedCommitProtocol - Using output committer class org.apache.hadoop.mapred.FileOutputCommitter
21:00:23.431 INFO FileOutputCommitter - File Output Committer Algorithm version is 1
21:00:23.431 INFO FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21:00:23.437 INFO FileOutputCommitter - Saved output of task 'attempt_202506122100234612247393360636310_5451_m_000000_0' to file:/tmp/MarkDups4449371746508496755/MarkDups.sam.parts/_temporary/0/task_202506122100234612247393360636310_5451_m_000000
21:00:23.437 INFO SparkHadoopMapRedUtil - attempt_202506122100234612247393360636310_5451_m_000000_0: Committed. Elapsed time: 0 ms.
21:00:23.437 INFO Executor - Finished task 0.0 in stage 1230.0 (TID 1050). 1858 bytes result sent to driver
21:00:23.438 INFO TaskSetManager - Finished task 0.0 in stage 1230.0 (TID 1050) in 11 ms on localhost (executor driver) (1/1)
21:00:23.438 INFO TaskSchedulerImpl - Removed TaskSet 1230.0, whose tasks have all completed, from pool
21:00:23.438 INFO DAGScheduler - ResultStage 1230 (runJob at SparkHadoopWriter.scala:83) finished in 0.022 s
21:00:23.438 INFO DAGScheduler - Job 551 is finished. Cancelling potential speculative or zombie tasks for this job
21:00:23.438 INFO TaskSchedulerImpl - Killing all running tasks in stage 1230: Stage finished
21:00:23.438 INFO DAGScheduler - Job 551 finished: runJob at SparkHadoopWriter.scala:83, took 0.036664 s
21:00:23.438 INFO SparkHadoopWriter - Start to commit write Job job_202506122100234612247393360636310_5451.
21:00:23.440 INFO SparkHadoopWriter - Write Job job_202506122100234612247393360636310_5451 committed. Elapsed time: 2 ms.
21:00:23.444 INFO HadoopFileSystemWrapper - Concatenating 2 parts to /tmp/MarkDups4449371746508496755/MarkDups.sam
21:00:23.446 INFO HadoopFileSystemWrapper - Concatenating to /tmp/MarkDups4449371746508496755/MarkDups.sam done
21:00:23.447 INFO MarkDuplicatesSpark - Shutting down engine
[June 12, 2025 at 9:00:23 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1535115264
[June 12, 2025 at 9:00:23 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes.
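The log above shows the output path the test exercises: each Spark partition is committed as a part file under `MarkDups.sam.parts`, and `HadoopFileSystemWrapper` then concatenates the parts into the single `/tmp/.../MarkDups.sam`. This is not the GATK implementation, but the concatenation step can be sketched with plain JDK file I/O (class name `ConcatParts` and the sample part contents are hypothetical):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class ConcatParts {
    /** Append each part file's bytes, in the given order, to a single output file. */
    static void concatenate(List<Path> parts, Path output) throws IOException {
        try (OutputStream out = Files.newOutputStream(output,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
            for (Path part : parts) {
                Files.copy(part, out); // streams the part's bytes onto the end of the output
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical two-part layout mirroring a <name>.sam.parts directory:
        // part 0 carries the header, part 1 the records.
        Path dir = Files.createTempDirectory("parts");
        Path p0 = Files.writeString(dir.resolve("part-00000"), "@HD\tVN:1.6\n");
        Path p1 = Files.writeString(dir.resolve("part-00001"), "readA\t4\t*\t0\t0\n");
        Path out = dir.resolve("out.sam");
        concatenate(List.of(p0, p1), out);
        System.out.println(Files.readString(out).lines().count()); // prints 2
    }
}
```

Ordering matters: the parts must be concatenated in partition order so the SAM header (written with partition 0) stays at the top of the merged file.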
Runtime.totalMemory()=1535115264
21:00:29.407 ERROR Executor - Exception in task 0.0 in stage 1450.0 (TID 1197)
org.broadinstitute.hellbender.exceptions.UserException$ReadMissingReadGroup: SAM/BAM/CRAM file (unknown) is malformed: Read RUNID:7:1203:2886:82292 is missing the read group (RG) tag, which is required by the GATK.
Please use http://gatkforums.broadinstitute.org/discussion/59/companion-utilities-replacereadgroups to fix this problem
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.getLibraryForRead(MarkDuplicatesSparkUtils.java:64) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.EmptyFragment.<init>(EmptyFragment.java:36) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.MarkDuplicatesSparkRecord.newEmptyFragment(MarkDuplicatesSparkRecord.java:44) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$0(MarkDuplicatesSparkUtils.java:141) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at java.base/java.util.stream.ReferencePipeline$15$1.accept(ReferencePipeline.java:540) ~[?:?]
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$5ac2632f$1(MarkDuplicatesSparkUtils.java:150) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMapToPair$1(JavaRDDLike.scala:143) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at scala.collection.Iterator$$anon$10.nextCur(Iterator.scala:594) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:608) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.scheduler.Task.run(Task.scala:141) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623) [gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
21:00:29.421 ERROR TaskSetManager - Task 0 in stage 1450.0 failed 1 times; aborting job
[June 12, 2025 at 9:00:29 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1535115264
21:00:29.660 ERROR Executor - Exception in task 0.0 in stage 1457.0 (TID 1200)
org.broadinstitute.hellbender.exceptions.UserException$HeaderMissingReadGroup: SAM/BAM/CRAM file (unknown) is malformed: Read RUNID:7:1203:2886:82292 contains an (RG) tag with the group NotADuplicateGroup which is not found in the file header.
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.getLibraryForRead(MarkDuplicatesSparkUtils.java:66) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.EmptyFragment.<init>(EmptyFragment.java:36) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.broadinstitute.hellbender.utils.read.markduplicates.sparkrecords.MarkDuplicatesSparkRecord.newEmptyFragment(MarkDuplicatesSparkRecord.java:44) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$0(MarkDuplicatesSparkUtils.java:141) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at java.base/java.util.stream.ReferencePipeline$15$1.accept(ReferencePipeline.java:540) ~[?:?]
	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) ~[?:?]
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) ~[?:?]
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[?:?]
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) ~[?:?]
	at org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSparkUtils.lambda$transformToDuplicateNames$5ac2632f$1(MarkDuplicatesSparkUtils.java:150) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.api.java.JavaRDDLike.$anonfun$flatMapToPair$1(JavaRDDLike.scala:143) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at scala.collection.Iterator$$anon$10.nextCur(Iterator.scala:594) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:608) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:104) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.scheduler.Task.run(Task.scala:141) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94) ~[gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623) [gatk-package-092bdf2-SNAPSHOT-local.jar:092bdf2-SNAPSHOT]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
21:00:29.662 ERROR TaskSetManager - Task 0 in stage 1457.0 failed 1 times; aborting job
[June 12, 2025 at 9:00:29 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=1535115264
[June 12, 2025 at 9:00:30 PM GMT] org.broadinstitute.hellbender.tools.spark.transforms.markduplicates.MarkDuplicatesSpark done.
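Both failures above originate in `MarkDuplicatesSparkUtils.getLibraryForRead`, which resolves a read's RG tag against the read groups declared in the SAM header: a read with no RG tag raises `ReadMissingReadGroup`, and an RG value absent from the header (here `NotADuplicateGroup`, from a deliberately malformed test input) raises `HeaderMissingReadGroup`. The check can be sketched roughly as follows; this is a simplified stand-in, not the GATK source, and the class name, exception types, and the `libraryByReadGroupId` map are hypothetical:

```java
import java.util.Map;

public class LibraryLookup {
    /**
     * Simplified sketch of the RG-to-library lookup: reject a read with no RG tag,
     * reject an RG id that the header does not declare, otherwise return the library.
     */
    static String getLibraryForRead(String rgId, Map<String, String> libraryByReadGroupId) {
        if (rgId == null) {
            // corresponds to UserException$ReadMissingReadGroup in the log above
            throw new IllegalStateException("read is missing the read group (RG) tag");
        }
        String library = libraryByReadGroupId.get(rgId);
        if (library == null) {
            // corresponds to UserException$HeaderMissingReadGroup in the log above
            throw new IllegalStateException("RG " + rgId + " not found in the file header");
        }
        return library;
    }

    public static void main(String[] args) {
        Map<String, String> header = Map.of("RG1", "libA"); // hypothetical @RG line: ID:RG1 LB:libA
        System.out.println(getLibraryForRead("RG1", header)); // prints libA
        try {
            getLibraryForRead("NotADuplicateGroup", header);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

In the real tool the fix is in the data, not the code: the input BAM must carry an RG tag on every read and a matching `@RG` header line, e.g. by re-adding read groups with Picard's AddOrReplaceReadGroups as the error message suggests.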
Elapsed time: 0.00 minutes. Runtime.totalMemory()=1535115264