Class org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSourceUnitTest

28 tests, 0 failures, 0 ignored, 5.469s duration, 100% successful

Tests

Test Duration Result
loadReadsNonExistentReference 0s passed
readsSparkSourceTestLenient[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.180s passed
readsSparkSourceTestLenient[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.150s passed
readsSparkSourceTestSilent[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.168s passed
readsSparkSourceTestSilent[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.148s passed
readsSparkSourceTestStrict[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.040s passed
readsSparkSourceTestStrict[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.028s passed
readsSparkSourceTest[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.188s passed
readsSparkSourceTest[1](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam, null) 0.176s passed
readsSparkSourceTest[2](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, null) 0.180s passed
readsSparkSourceTest[3](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.154s passed
readsSparkSourceTest[4](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.fasta) 0.128s passed
readsSparkSourceUnknownHostTest 0.005s passed
shardedReadsSparkSourceTest[0](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam, null) 0.206s passed
testCRAMReferenceFromHDFS 1.629s passed
testCRAMReferenceRequired 0s passed
testCheckCRAMReference 0s passed
testGetHeader[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.037s passed
testGetHeader[1](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam, null) 0.034s passed
testGetHeader[2](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, null) 0.028s passed
testGetHeader[3](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.023s passed
testGetHeader[4](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.fasta) 0.023s passed
testHeadersAreStripped 0.135s passed
testIntervals 0.095s passed
testIntervalsWithUnmapped 0.088s passed
testPartitionSizing 0.064s passed
testReadFromFileAndHDFS 1.562s passed
testReject2BitCRAMReference 0s passed
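
For orientation, below is a minimal sketch of the load-and-collect pattern the readsSparkSourceTest / readsSparkSourceTestLenient / readsSparkSourceTestSilent / readsSparkSourceTestStrict cases above appear to exercise: build a ReadsSparkSource on a local Spark context, load a test BAM into a JavaRDD of reads, and collect it on the driver. The constructor arguments and the getParallelReads(path, reference) signature are assumptions based on the log output and may not match the exact GATK API in this version; this is an illustrative sketch, not the test's actual code.

// Hypothetical sketch only. ReadsSparkSource method names/signatures
// (constructor taking a ValidationStringency, getParallelReads(path, ref))
// are assumed from the report above and may differ in the real GATK source.
import htsjdk.samtools.ValidationStringency;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSource;
import org.broadinstitute.hellbender.utils.read.GATKRead;

import java.util.List;

public class ReadsSparkSourceSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("ReadsSparkSourceSketch")
                .setMaster("local[*]");
        try (JavaSparkContext ctx = new JavaSparkContext(conf)) {
            // One of the BAM inputs listed in the parameterized tests above.
            String bam = "src/test/resources/NA12878.chr17_69k_70k.dictFix.bam";

            // Assumed constructor: Spark context plus the SAM validation stringency
            // that the Lenient/Silent/Strict test variants toggle.
            ReadsSparkSource source = new ReadsSparkSource(ctx, ValidationStringency.SILENT);

            // Assumed API: distributed load of the reads; reference path is null for BAM,
            // and would point at a FASTA for the CRAM cases.
            JavaRDD<GATKRead> parallelReads = source.getParallelReads(bam, null);
            List<GATKRead> collected = parallelReads.collect();

            System.out.println("Collected " + collected.size() + " reads from " + bam);
        }
    }
}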

Standard error

20:33:54.710 INFO  MemoryStore - Block broadcast_536 stored as values in memory (estimated size 297.9 KiB, free 1918.3 MiB)
20:33:54.720 INFO  MemoryStore - Block broadcast_536_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.2 MiB)
20:33:54.720 INFO  BlockManagerInfo - Added broadcast_536_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.6 MiB)
20:33:54.721 INFO  SparkContext - Created broadcast 536 from newAPIHadoopFile at PathSplitSource.java:96
20:33:54.758 INFO  MemoryStore - Block broadcast_537 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
20:33:54.765 INFO  MemoryStore - Block broadcast_537_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
20:33:54.765 INFO  BlockManagerInfo - Added broadcast_537_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.6 MiB)
20:33:54.766 INFO  SparkContext - Created broadcast 537 from newAPIHadoopFile at PathSplitSource.java:96
20:33:54.789 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:54.789 INFO  DAGScheduler - Got job 201 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:54.789 INFO  DAGScheduler - Final stage: ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:54.789 INFO  DAGScheduler - Parents of final stage: List()
20:33:54.789 INFO  DAGScheduler - Missing parents: List()
20:33:54.789 INFO  DAGScheduler - Submitting ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:54.790 INFO  MemoryStore - Block broadcast_538 stored as values in memory (estimated size 3.0 KiB, free 1917.9 MiB)
20:33:54.790 INFO  MemoryStore - Block broadcast_538_piece0 stored as bytes in memory (estimated size 1777.0 B, free 1917.9 MiB)
20:33:54.790 INFO  BlockManagerInfo - Added broadcast_538_piece0 in memory on localhost:45281 (size: 1777.0 B, free: 1919.6 MiB)
20:33:54.790 INFO  SparkContext - Created broadcast 538 from broadcast at DAGScheduler.scala:1580
20:33:54.790 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:54.790 INFO  TaskSchedulerImpl - Adding task set 265.0 with 4 tasks resource profile 0
20:33:54.791 INFO  TaskSetManager - Starting task 0.0 in stage 265.0 (TID 321) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
20:33:54.791 INFO  TaskSetManager - Starting task 1.0 in stage 265.0 (TID 322) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
20:33:54.792 INFO  TaskSetManager - Starting task 2.0 in stage 265.0 (TID 323) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
20:33:54.792 INFO  TaskSetManager - Starting task 3.0 in stage 265.0 (TID 324) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
20:33:54.792 INFO  Executor - Running task 1.0 in stage 265.0 (TID 322)
20:33:54.792 INFO  Executor - Running task 0.0 in stage 265.0 (TID 321)
20:33:54.792 INFO  Executor - Running task 2.0 in stage 265.0 (TID 323)
20:33:54.793 INFO  Executor - Running task 3.0 in stage 265.0 (TID 324)
20:33:54.794 INFO  Executor - Finished task 0.0 in stage 265.0 (TID 321). 40312 bytes result sent to driver
20:33:54.794 INFO  Executor - Finished task 1.0 in stage 265.0 (TID 322). 40426 bytes result sent to driver
20:33:54.794 INFO  Executor - Finished task 3.0 in stage 265.0 (TID 324). 40762 bytes result sent to driver
20:33:54.794 INFO  Executor - Finished task 2.0 in stage 265.0 (TID 323). 40380 bytes result sent to driver
20:33:54.794 INFO  TaskSetManager - Finished task 0.0 in stage 265.0 (TID 321) in 3 ms on localhost (executor driver) (1/4)
20:33:54.795 INFO  TaskSetManager - Finished task 3.0 in stage 265.0 (TID 324) in 3 ms on localhost (executor driver) (2/4)
20:33:54.795 INFO  TaskSetManager - Finished task 2.0 in stage 265.0 (TID 323) in 4 ms on localhost (executor driver) (3/4)
20:33:54.795 INFO  TaskSetManager - Finished task 1.0 in stage 265.0 (TID 322) in 4 ms on localhost (executor driver) (4/4)
20:33:54.795 INFO  TaskSchedulerImpl - Removed TaskSet 265.0, whose tasks have all completed, from pool 
20:33:54.795 INFO  DAGScheduler - ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
20:33:54.795 INFO  DAGScheduler - Job 201 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:54.795 INFO  TaskSchedulerImpl - Killing all running tasks in stage 265: Stage finished
20:33:54.795 INFO  DAGScheduler - Job 201 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006717 s
20:33:54.796 INFO  FileInputFormat - Total input files to process : 1
20:33:54.835 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:54.835 INFO  DAGScheduler - Got job 202 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:54.835 INFO  DAGScheduler - Final stage: ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:54.835 INFO  DAGScheduler - Parents of final stage: List()
20:33:54.835 INFO  DAGScheduler - Missing parents: List()
20:33:54.835 INFO  DAGScheduler - Submitting ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:54.852 INFO  MemoryStore - Block broadcast_539 stored as values in memory (estimated size 426.2 KiB, free 1917.5 MiB)
20:33:54.854 INFO  MemoryStore - Block broadcast_539_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.3 MiB)
20:33:54.854 INFO  BlockManagerInfo - Added broadcast_539_piece0 in memory on localhost:45281 (size: 153.6 KiB, free: 1919.4 MiB)
20:33:54.854 INFO  SparkContext - Created broadcast 539 from broadcast at DAGScheduler.scala:1580
20:33:54.854 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:54.854 INFO  TaskSchedulerImpl - Adding task set 266.0 with 1 tasks resource profile 0
20:33:54.855 INFO  TaskSetManager - Starting task 0.0 in stage 266.0 (TID 325) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
20:33:54.855 INFO  Executor - Running task 0.0 in stage 266.0 (TID 325)
20:33:54.891 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
20:33:54.894 INFO  Executor - Finished task 0.0 in stage 266.0 (TID 325). 159572 bytes result sent to driver
20:33:54.895 INFO  TaskSetManager - Finished task 0.0 in stage 266.0 (TID 325) in 40 ms on localhost (executor driver) (1/1)
20:33:54.895 INFO  TaskSchedulerImpl - Removed TaskSet 266.0, whose tasks have all completed, from pool 
20:33:54.895 INFO  DAGScheduler - ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.060 s
20:33:54.895 INFO  DAGScheduler - Job 202 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:54.895 INFO  TaskSchedulerImpl - Killing all running tasks in stage 266: Stage finished
20:33:54.895 INFO  DAGScheduler - Job 202 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.060355 s
20:33:54.899 INFO  MemoryStore - Block broadcast_540 stored as values in memory (estimated size 298.0 KiB, free 1917.0 MiB)
20:33:54.905 INFO  MemoryStore - Block broadcast_540_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.0 MiB)
20:33:54.905 INFO  BlockManagerInfo - Added broadcast_540_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.4 MiB)
20:33:54.905 INFO  SparkContext - Created broadcast 540 from newAPIHadoopFile at PathSplitSource.java:96
20:33:54.929 INFO  MemoryStore - Block broadcast_541 stored as values in memory (estimated size 298.0 KiB, free 1916.7 MiB)
20:33:54.935 INFO  MemoryStore - Block broadcast_541_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1916.6 MiB)
20:33:54.935 INFO  BlockManagerInfo - Added broadcast_541_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.4 MiB)
20:33:54.935 INFO  SparkContext - Created broadcast 541 from newAPIHadoopFile at PathSplitSource.java:96
20:33:54.957 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:54.958 INFO  DAGScheduler - Got job 203 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:54.958 INFO  DAGScheduler - Final stage: ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:54.958 INFO  DAGScheduler - Parents of final stage: List()
20:33:54.958 INFO  DAGScheduler - Missing parents: List()
20:33:54.958 INFO  DAGScheduler - Submitting ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:54.958 INFO  MemoryStore - Block broadcast_542 stored as values in memory (estimated size 3.0 KiB, free 1916.6 MiB)
20:33:54.959 INFO  MemoryStore - Block broadcast_542_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1916.6 MiB)
20:33:54.959 INFO  BlockManagerInfo - Added broadcast_542_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.3 MiB)
20:33:54.959 INFO  SparkContext - Created broadcast 542 from broadcast at DAGScheduler.scala:1580
20:33:54.959 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:54.959 INFO  TaskSchedulerImpl - Adding task set 267.0 with 4 tasks resource profile 0
20:33:54.960 INFO  TaskSetManager - Starting task 0.0 in stage 267.0 (TID 326) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
20:33:54.960 INFO  TaskSetManager - Starting task 1.0 in stage 267.0 (TID 327) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
20:33:54.960 INFO  TaskSetManager - Starting task 2.0 in stage 267.0 (TID 328) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
20:33:54.961 INFO  TaskSetManager - Starting task 3.0 in stage 267.0 (TID 329) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
20:33:54.961 INFO  Executor - Running task 1.0 in stage 267.0 (TID 327)
20:33:54.961 INFO  Executor - Running task 0.0 in stage 267.0 (TID 326)
20:33:54.961 INFO  Executor - Running task 3.0 in stage 267.0 (TID 329)
20:33:54.961 INFO  Executor - Running task 2.0 in stage 267.0 (TID 328)
20:33:54.963 INFO  Executor - Finished task 1.0 in stage 267.0 (TID 327). 163368 bytes result sent to driver
20:33:54.963 INFO  Executor - Finished task 3.0 in stage 267.0 (TID 329). 162694 bytes result sent to driver
20:33:54.963 INFO  Executor - Finished task 0.0 in stage 267.0 (TID 326). 163161 bytes result sent to driver
20:33:54.964 INFO  Executor - Finished task 2.0 in stage 267.0 (TID 328). 163356 bytes result sent to driver
20:33:54.964 INFO  TaskSetManager - Finished task 1.0 in stage 267.0 (TID 327) in 4 ms on localhost (executor driver) (1/4)
20:33:54.964 INFO  TaskSetManager - Finished task 0.0 in stage 267.0 (TID 326) in 5 ms on localhost (executor driver) (2/4)
20:33:54.964 INFO  TaskSetManager - Finished task 3.0 in stage 267.0 (TID 329) in 4 ms on localhost (executor driver) (3/4)
20:33:54.965 INFO  TaskSetManager - Finished task 2.0 in stage 267.0 (TID 328) in 5 ms on localhost (executor driver) (4/4)
20:33:54.965 INFO  TaskSchedulerImpl - Removed TaskSet 267.0, whose tasks have all completed, from pool 
20:33:54.965 INFO  DAGScheduler - ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.007 s
20:33:54.965 INFO  DAGScheduler - Job 203 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:54.965 INFO  TaskSchedulerImpl - Killing all running tasks in stage 267: Stage finished
20:33:54.965 INFO  DAGScheduler - Job 203 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007717 s
20:33:54.966 INFO  FileInputFormat - Total input files to process : 1
20:33:55.002 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.002 INFO  DAGScheduler - Got job 204 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.002 INFO  DAGScheduler - Final stage: ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.002 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.002 INFO  DAGScheduler - Missing parents: List()
20:33:55.002 INFO  DAGScheduler - Submitting ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.019 INFO  MemoryStore - Block broadcast_543 stored as values in memory (estimated size 426.2 KiB, free 1916.2 MiB)
20:33:55.024 INFO  BlockManagerInfo - Removed broadcast_539_piece0 on localhost:45281 in memory (size: 153.6 KiB, free: 1919.5 MiB)
20:33:55.024 INFO  BlockManagerInfo - Removed broadcast_534_piece0 on localhost:45281 in memory (size: 153.6 KiB, free: 1919.6 MiB)
20:33:55.024 INFO  BlockManagerInfo - Removed broadcast_532_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:33:55.025 INFO  MemoryStore - Block broadcast_543_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.5 MiB)
20:33:55.025 INFO  BlockManagerInfo - Added broadcast_543_piece0 in memory on localhost:45281 (size: 153.6 KiB, free: 1919.5 MiB)
20:33:55.025 INFO  SparkContext - Created broadcast 543 from broadcast at DAGScheduler.scala:1580
20:33:55.025 INFO  BlockManagerInfo - Removed broadcast_538_piece0 on localhost:45281 in memory (size: 1777.0 B, free: 1919.5 MiB)
20:33:55.025 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.025 INFO  TaskSchedulerImpl - Adding task set 268.0 with 1 tasks resource profile 0
20:33:55.025 INFO  BlockManagerInfo - Removed broadcast_531_piece0 on localhost:45281 in memory (size: 233.0 B, free: 1919.5 MiB)
20:33:55.026 INFO  BlockManagerInfo - Removed broadcast_542_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:33:55.026 INFO  TaskSetManager - Starting task 0.0 in stage 268.0 (TID 330) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes) 
20:33:55.026 INFO  Executor - Running task 0.0 in stage 268.0 (TID 330)
20:33:55.026 INFO  BlockManagerInfo - Removed broadcast_537_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:55.026 INFO  BlockManagerInfo - Removed broadcast_525_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:55.027 INFO  BlockManagerInfo - Removed broadcast_535_piece0 on localhost:45281 in memory (size: 54.5 KiB, free: 1919.7 MiB)
20:33:55.027 INFO  BlockManagerInfo - Removed broadcast_540_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:33:55.027 INFO  BlockManagerInfo - Removed broadcast_536_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:33:55.058 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
20:33:55.068 INFO  Executor - Finished task 0.0 in stage 268.0 (TID 330). 650141 bytes result sent to driver
20:33:55.070 INFO  TaskSetManager - Finished task 0.0 in stage 268.0 (TID 330) in 44 ms on localhost (executor driver) (1/1)
20:33:55.070 INFO  TaskSchedulerImpl - Removed TaskSet 268.0, whose tasks have all completed, from pool 
20:33:55.070 INFO  DAGScheduler - ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.067 s
20:33:55.070 INFO  DAGScheduler - Job 204 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.070 INFO  TaskSchedulerImpl - Killing all running tasks in stage 268: Stage finished
20:33:55.070 INFO  DAGScheduler - Job 204 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.068332 s
20:33:55.074 INFO  MemoryStore - Block broadcast_544 stored as values in memory (estimated size 298.0 KiB, free 1918.8 MiB)
20:33:55.085 INFO  MemoryStore - Block broadcast_544_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.8 MiB)
20:33:55.085 INFO  BlockManagerInfo - Added broadcast_544_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.8 MiB)
20:33:55.085 INFO  SparkContext - Created broadcast 544 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.115 INFO  MemoryStore - Block broadcast_545 stored as values in memory (estimated size 298.0 KiB, free 1918.5 MiB)
20:33:55.121 INFO  MemoryStore - Block broadcast_545_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.4 MiB)
20:33:55.121 INFO  BlockManagerInfo - Added broadcast_545_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.7 MiB)
20:33:55.122 INFO  SparkContext - Created broadcast 545 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.144 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:55.145 INFO  DAGScheduler - Got job 205 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:55.145 INFO  DAGScheduler - Final stage: ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:55.145 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.145 INFO  DAGScheduler - Missing parents: List()
20:33:55.145 INFO  DAGScheduler - Submitting ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:55.145 INFO  MemoryStore - Block broadcast_546 stored as values in memory (estimated size 3.0 KiB, free 1918.4 MiB)
20:33:55.146 INFO  MemoryStore - Block broadcast_546_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.4 MiB)
20:33:55.146 INFO  BlockManagerInfo - Added broadcast_546_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.7 MiB)
20:33:55.146 INFO  SparkContext - Created broadcast 546 from broadcast at DAGScheduler.scala:1580
20:33:55.146 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:55.146 INFO  TaskSchedulerImpl - Adding task set 269.0 with 4 tasks resource profile 0
20:33:55.147 INFO  TaskSetManager - Starting task 0.0 in stage 269.0 (TID 331) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
20:33:55.147 INFO  TaskSetManager - Starting task 1.0 in stage 269.0 (TID 332) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
20:33:55.147 INFO  TaskSetManager - Starting task 2.0 in stage 269.0 (TID 333) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
20:33:55.148 INFO  TaskSetManager - Starting task 3.0 in stage 269.0 (TID 334) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
20:33:55.148 INFO  Executor - Running task 3.0 in stage 269.0 (TID 334)
20:33:55.148 INFO  Executor - Running task 0.0 in stage 269.0 (TID 331)
20:33:55.148 INFO  Executor - Running task 2.0 in stage 269.0 (TID 333)
20:33:55.148 INFO  Executor - Running task 1.0 in stage 269.0 (TID 332)
20:33:55.150 INFO  Executor - Finished task 3.0 in stage 269.0 (TID 334). 162694 bytes result sent to driver
20:33:55.150 INFO  Executor - Finished task 2.0 in stage 269.0 (TID 333). 163313 bytes result sent to driver
20:33:55.150 INFO  Executor - Finished task 1.0 in stage 269.0 (TID 332). 163325 bytes result sent to driver
20:33:55.150 INFO  TaskSetManager - Finished task 3.0 in stage 269.0 (TID 334) in 3 ms on localhost (executor driver) (1/4)
20:33:55.151 INFO  TaskSetManager - Finished task 2.0 in stage 269.0 (TID 333) in 4 ms on localhost (executor driver) (2/4)
20:33:55.151 INFO  TaskSetManager - Finished task 1.0 in stage 269.0 (TID 332) in 4 ms on localhost (executor driver) (3/4)
20:33:55.151 INFO  Executor - Finished task 0.0 in stage 269.0 (TID 331). 163161 bytes result sent to driver
20:33:55.152 INFO  TaskSetManager - Finished task 0.0 in stage 269.0 (TID 331) in 6 ms on localhost (executor driver) (4/4)
20:33:55.152 INFO  TaskSchedulerImpl - Removed TaskSet 269.0, whose tasks have all completed, from pool 
20:33:55.152 INFO  DAGScheduler - ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.007 s
20:33:55.152 INFO  DAGScheduler - Job 205 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.152 INFO  TaskSchedulerImpl - Killing all running tasks in stage 269: Stage finished
20:33:55.152 INFO  DAGScheduler - Job 205 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007943 s
20:33:55.153 INFO  FileInputFormat - Total input files to process : 1
20:33:55.189 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.190 INFO  DAGScheduler - Got job 206 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.190 INFO  DAGScheduler - Final stage: ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.190 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.190 INFO  DAGScheduler - Missing parents: List()
20:33:55.190 INFO  DAGScheduler - Submitting ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.207 INFO  MemoryStore - Block broadcast_547 stored as values in memory (estimated size 426.2 KiB, free 1918.0 MiB)
20:33:55.208 INFO  MemoryStore - Block broadcast_547_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.8 MiB)
20:33:55.208 INFO  BlockManagerInfo - Added broadcast_547_piece0 in memory on localhost:45281 (size: 153.7 KiB, free: 1919.6 MiB)
20:33:55.208 INFO  SparkContext - Created broadcast 547 from broadcast at DAGScheduler.scala:1580
20:33:55.208 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.208 INFO  TaskSchedulerImpl - Adding task set 270.0 with 1 tasks resource profile 0
20:33:55.209 INFO  TaskSetManager - Starting task 0.0 in stage 270.0 (TID 335) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7912 bytes) 
20:33:55.209 INFO  Executor - Running task 0.0 in stage 270.0 (TID 335)
20:33:55.238 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam:0+216896
20:33:55.249 INFO  Executor - Finished task 0.0 in stage 270.0 (TID 335). 650141 bytes result sent to driver
20:33:55.251 INFO  TaskSetManager - Finished task 0.0 in stage 270.0 (TID 335) in 42 ms on localhost (executor driver) (1/1)
20:33:55.251 INFO  TaskSchedulerImpl - Removed TaskSet 270.0, whose tasks have all completed, from pool 
20:33:55.251 INFO  DAGScheduler - ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.061 s
20:33:55.251 INFO  DAGScheduler - Job 206 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.251 INFO  TaskSchedulerImpl - Killing all running tasks in stage 270: Stage finished
20:33:55.251 INFO  DAGScheduler - Job 206 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.061634 s
20:33:55.253 INFO  MemoryStore - Block broadcast_548 stored as values in memory (estimated size 536.0 B, free 1917.8 MiB)
20:33:55.254 INFO  MemoryStore - Block broadcast_548_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.8 MiB)
20:33:55.254 INFO  BlockManagerInfo - Added broadcast_548_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.6 MiB)
20:33:55.254 INFO  SparkContext - Created broadcast 548 from broadcast at CramSource.java:114
20:33:55.255 INFO  MemoryStore - Block broadcast_549 stored as values in memory (estimated size 297.9 KiB, free 1917.6 MiB)
20:33:55.261 INFO  MemoryStore - Block broadcast_549_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.5 MiB)
20:33:55.261 INFO  BlockManagerInfo - Added broadcast_549_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.261 INFO  SparkContext - Created broadcast 549 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.280 INFO  MemoryStore - Block broadcast_550 stored as values in memory (estimated size 536.0 B, free 1917.5 MiB)
20:33:55.281 INFO  MemoryStore - Block broadcast_550_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.5 MiB)
20:33:55.281 INFO  BlockManagerInfo - Added broadcast_550_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.5 MiB)
20:33:55.281 INFO  SparkContext - Created broadcast 550 from broadcast at CramSource.java:114
20:33:55.282 INFO  MemoryStore - Block broadcast_551 stored as values in memory (estimated size 297.9 KiB, free 1917.2 MiB)
20:33:55.288 INFO  MemoryStore - Block broadcast_551_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.2 MiB)
20:33:55.288 INFO  BlockManagerInfo - Added broadcast_551_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.288 INFO  SparkContext - Created broadcast 551 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.305 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:55.305 INFO  DAGScheduler - Got job 207 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:55.305 INFO  DAGScheduler - Final stage: ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:55.305 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.305 INFO  DAGScheduler - Missing parents: List()
20:33:55.305 INFO  DAGScheduler - Submitting ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:55.306 INFO  MemoryStore - Block broadcast_552 stored as values in memory (estimated size 3.0 KiB, free 1917.2 MiB)
20:33:55.306 INFO  MemoryStore - Block broadcast_552_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.2 MiB)
20:33:55.306 INFO  BlockManagerInfo - Added broadcast_552_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.5 MiB)
20:33:55.306 INFO  SparkContext - Created broadcast 552 from broadcast at DAGScheduler.scala:1580
20:33:55.306 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:55.306 INFO  TaskSchedulerImpl - Adding task set 271.0 with 4 tasks resource profile 0
20:33:55.307 INFO  TaskSetManager - Starting task 0.0 in stage 271.0 (TID 336) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
20:33:55.307 INFO  TaskSetManager - Starting task 1.0 in stage 271.0 (TID 337) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
20:33:55.308 INFO  TaskSetManager - Starting task 2.0 in stage 271.0 (TID 338) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
20:33:55.308 INFO  TaskSetManager - Starting task 3.0 in stage 271.0 (TID 339) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
20:33:55.308 INFO  Executor - Running task 1.0 in stage 271.0 (TID 337)
20:33:55.308 INFO  Executor - Running task 3.0 in stage 271.0 (TID 339)
20:33:55.308 INFO  Executor - Running task 2.0 in stage 271.0 (TID 338)
20:33:55.308 INFO  Executor - Running task 0.0 in stage 271.0 (TID 336)
20:33:55.314 INFO  Executor - Finished task 2.0 in stage 271.0 (TID 338). 39113 bytes result sent to driver
20:33:55.314 INFO  Executor - Finished task 0.0 in stage 271.0 (TID 336). 38993 bytes result sent to driver
20:33:55.318 INFO  Executor - Finished task 3.0 in stage 271.0 (TID 339). 39386 bytes result sent to driver
20:33:55.318 INFO  Executor - Finished task 1.0 in stage 271.0 (TID 337). 39173 bytes result sent to driver
20:33:55.318 INFO  TaskSetManager - Finished task 2.0 in stage 271.0 (TID 338) in 11 ms on localhost (executor driver) (1/4)
20:33:55.318 INFO  BlockManagerInfo - Removed broadcast_549_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.318 INFO  TaskSetManager - Finished task 0.0 in stage 271.0 (TID 336) in 11 ms on localhost (executor driver) (2/4)
20:33:55.319 INFO  TaskSetManager - Finished task 3.0 in stage 271.0 (TID 339) in 11 ms on localhost (executor driver) (3/4)
20:33:55.319 INFO  TaskSetManager - Finished task 1.0 in stage 271.0 (TID 337) in 12 ms on localhost (executor driver) (4/4)
20:33:55.319 INFO  TaskSchedulerImpl - Removed TaskSet 271.0, whose tasks have all completed, from pool 
20:33:55.319 INFO  DAGScheduler - ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.014 s
20:33:55.319 INFO  DAGScheduler - Job 207 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.319 INFO  TaskSchedulerImpl - Killing all running tasks in stage 271: Stage finished
20:33:55.319 INFO  DAGScheduler - Job 207 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.014443 s
20:33:55.319 INFO  BlockManagerInfo - Removed broadcast_544_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.5 MiB)
20:33:55.320 INFO  BlockManagerInfo - Removed broadcast_541_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.6 MiB)
20:33:55.320 INFO  BlockManagerInfo - Removed broadcast_543_piece0 on localhost:45281 in memory (size: 153.6 KiB, free: 1919.7 MiB)
20:33:55.320 INFO  FileInputFormat - Total input files to process : 1
20:33:55.320 INFO  BlockManagerInfo - Removed broadcast_546_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.7 MiB)
20:33:55.321 INFO  BlockManagerInfo - Removed broadcast_545_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:33:55.321 INFO  BlockManagerInfo - Removed broadcast_547_piece0 on localhost:45281 in memory (size: 153.7 KiB, free: 1919.9 MiB)
20:33:55.321 INFO  BlockManagerInfo - Removed broadcast_548_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.9 MiB)
20:33:55.355 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.355 INFO  DAGScheduler - Got job 208 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.355 INFO  DAGScheduler - Final stage: ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.355 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.355 INFO  DAGScheduler - Missing parents: List()
20:33:55.355 INFO  DAGScheduler - Submitting ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.370 INFO  MemoryStore - Block broadcast_553 stored as values in memory (estimated size 286.8 KiB, free 1919.4 MiB)
20:33:55.371 INFO  MemoryStore - Block broadcast_553_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1919.3 MiB)
20:33:55.371 INFO  BlockManagerInfo - Added broadcast_553_piece0 in memory on localhost:45281 (size: 103.6 KiB, free: 1919.8 MiB)
20:33:55.371 INFO  SparkContext - Created broadcast 553 from broadcast at DAGScheduler.scala:1580
20:33:55.371 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.371 INFO  TaskSchedulerImpl - Adding task set 272.0 with 1 tasks resource profile 0
20:33:55.372 INFO  TaskSetManager - Starting task 0.0 in stage 272.0 (TID 340) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
20:33:55.372 INFO  Executor - Running task 0.0 in stage 272.0 (TID 340)
20:33:55.396 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
20:33:55.403 INFO  Executor - Finished task 0.0 in stage 272.0 (TID 340). 154058 bytes result sent to driver
20:33:55.404 INFO  TaskSetManager - Finished task 0.0 in stage 272.0 (TID 340) in 32 ms on localhost (executor driver) (1/1)
20:33:55.404 INFO  TaskSchedulerImpl - Removed TaskSet 272.0, whose tasks have all completed, from pool 
20:33:55.404 INFO  DAGScheduler - ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.049 s
20:33:55.404 INFO  DAGScheduler - Job 208 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.404 INFO  TaskSchedulerImpl - Killing all running tasks in stage 272: Stage finished
20:33:55.404 INFO  DAGScheduler - Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.049615 s
20:33:55.407 INFO  MemoryStore - Block broadcast_554 stored as values in memory (estimated size 608.0 B, free 1919.3 MiB)
20:33:55.408 INFO  MemoryStore - Block broadcast_554_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.3 MiB)
20:33:55.408 INFO  BlockManagerInfo - Added broadcast_554_piece0 in memory on localhost:45281 (size: 206.0 B, free: 1919.8 MiB)
20:33:55.408 INFO  SparkContext - Created broadcast 554 from broadcast at CramSource.java:114
20:33:55.409 INFO  MemoryStore - Block broadcast_555 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
20:33:55.415 INFO  MemoryStore - Block broadcast_555_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
20:33:55.415 INFO  BlockManagerInfo - Added broadcast_555_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.8 MiB)
20:33:55.416 INFO  SparkContext - Created broadcast 555 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.434 INFO  MemoryStore - Block broadcast_556 stored as values in memory (estimated size 608.0 B, free 1918.9 MiB)
20:33:55.434 INFO  MemoryStore - Block broadcast_556_piece0 stored as bytes in memory (estimated size 206.0 B, free 1918.9 MiB)
20:33:55.434 INFO  BlockManagerInfo - Added broadcast_556_piece0 in memory on localhost:45281 (size: 206.0 B, free: 1919.8 MiB)
20:33:55.434 INFO  SparkContext - Created broadcast 556 from broadcast at CramSource.java:114
20:33:55.435 INFO  MemoryStore - Block broadcast_557 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
20:33:55.442 INFO  MemoryStore - Block broadcast_557_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:33:55.442 INFO  BlockManagerInfo - Added broadcast_557_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.7 MiB)
20:33:55.442 INFO  SparkContext - Created broadcast 557 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.459 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:55.460 INFO  DAGScheduler - Got job 209 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:55.460 INFO  DAGScheduler - Final stage: ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:55.460 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.460 INFO  DAGScheduler - Missing parents: List()
20:33:55.460 INFO  DAGScheduler - Submitting ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:55.460 INFO  MemoryStore - Block broadcast_558 stored as values in memory (estimated size 3.0 KiB, free 1918.6 MiB)
20:33:55.461 INFO  MemoryStore - Block broadcast_558_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.6 MiB)
20:33:55.461 INFO  BlockManagerInfo - Added broadcast_558_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.7 MiB)
20:33:55.461 INFO  SparkContext - Created broadcast 558 from broadcast at DAGScheduler.scala:1580
20:33:55.461 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:55.461 INFO  TaskSchedulerImpl - Adding task set 273.0 with 4 tasks resource profile 0
20:33:55.462 INFO  TaskSetManager - Starting task 0.0 in stage 273.0 (TID 341) (localhost, executor driver, partition 0, PROCESS_LOCAL, 8303 bytes) 
20:33:55.462 INFO  TaskSetManager - Starting task 1.0 in stage 273.0 (TID 342) (localhost, executor driver, partition 1, PROCESS_LOCAL, 8292 bytes) 
20:33:55.462 INFO  TaskSetManager - Starting task 2.0 in stage 273.0 (TID 343) (localhost, executor driver, partition 2, PROCESS_LOCAL, 8292 bytes) 
20:33:55.462 INFO  TaskSetManager - Starting task 3.0 in stage 273.0 (TID 344) (localhost, executor driver, partition 3, PROCESS_LOCAL, 8184 bytes) 
20:33:55.462 INFO  Executor - Running task 1.0 in stage 273.0 (TID 342)
20:33:55.462 INFO  Executor - Running task 0.0 in stage 273.0 (TID 341)
20:33:55.462 INFO  Executor - Running task 3.0 in stage 273.0 (TID 344)
20:33:55.462 INFO  Executor - Running task 2.0 in stage 273.0 (TID 343)
20:33:55.464 INFO  Executor - Finished task 1.0 in stage 273.0 (TID 342). 1524 bytes result sent to driver
20:33:55.464 INFO  Executor - Finished task 2.0 in stage 273.0 (TID 343). 1524 bytes result sent to driver
20:33:55.464 INFO  Executor - Finished task 3.0 in stage 273.0 (TID 344). 1416 bytes result sent to driver
20:33:55.464 INFO  Executor - Finished task 0.0 in stage 273.0 (TID 341). 1535 bytes result sent to driver
20:33:55.464 INFO  TaskSetManager - Finished task 1.0 in stage 273.0 (TID 342) in 2 ms on localhost (executor driver) (1/4)
20:33:55.464 INFO  TaskSetManager - Finished task 2.0 in stage 273.0 (TID 343) in 2 ms on localhost (executor driver) (2/4)
20:33:55.465 INFO  TaskSetManager - Finished task 3.0 in stage 273.0 (TID 344) in 3 ms on localhost (executor driver) (3/4)
20:33:55.465 INFO  TaskSetManager - Finished task 0.0 in stage 273.0 (TID 341) in 4 ms on localhost (executor driver) (4/4)
20:33:55.465 INFO  TaskSchedulerImpl - Removed TaskSet 273.0, whose tasks have all completed, from pool 
20:33:55.465 INFO  DAGScheduler - ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
20:33:55.465 INFO  DAGScheduler - Job 209 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.465 INFO  TaskSchedulerImpl - Killing all running tasks in stage 273: Stage finished
20:33:55.465 INFO  DAGScheduler - Job 209 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005762 s
20:33:55.466 INFO  FileInputFormat - Total input files to process : 1
20:33:55.494 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.494 INFO  DAGScheduler - Got job 210 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.494 INFO  DAGScheduler - Final stage: ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.494 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.494 INFO  DAGScheduler - Missing parents: List()
20:33:55.494 INFO  DAGScheduler - Submitting ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.506 INFO  MemoryStore - Block broadcast_559 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
20:33:55.507 INFO  MemoryStore - Block broadcast_559_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
20:33:55.507 INFO  BlockManagerInfo - Added broadcast_559_piece0 in memory on localhost:45281 (size: 103.6 KiB, free: 1919.6 MiB)
20:33:55.507 INFO  SparkContext - Created broadcast 559 from broadcast at DAGScheduler.scala:1580
20:33:55.507 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.507 INFO  TaskSchedulerImpl - Adding task set 274.0 with 1 tasks resource profile 0
20:33:55.508 INFO  TaskSetManager - Starting task 0.0 in stage 274.0 (TID 345) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7851 bytes) 
20:33:55.508 INFO  Executor - Running task 0.0 in stage 274.0 (TID 345)
20:33:55.528 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram:0+13330
20:33:55.532 INFO  Executor - Finished task 0.0 in stage 274.0 (TID 345). 3736 bytes result sent to driver
20:33:55.532 INFO  TaskSetManager - Finished task 0.0 in stage 274.0 (TID 345) in 25 ms on localhost (executor driver) (1/1)
20:33:55.532 INFO  TaskSchedulerImpl - Removed TaskSet 274.0, whose tasks have all completed, from pool 
20:33:55.532 INFO  DAGScheduler - ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.037 s
20:33:55.532 INFO  DAGScheduler - Job 210 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.532 INFO  TaskSchedulerImpl - Killing all running tasks in stage 274: Stage finished
20:33:55.532 INFO  DAGScheduler - Job 210 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.038412 s
20:33:55.535 INFO  MemoryStore - Block broadcast_560 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
20:33:55.541 INFO  MemoryStore - Block broadcast_560_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
20:33:55.541 INFO  BlockManagerInfo - Added broadcast_560_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.6 MiB)
20:33:55.541 INFO  SparkContext - Created broadcast 560 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:33:55.565 INFO  MemoryStore - Block broadcast_561 stored as values in memory (estimated size 297.9 KiB, free 1917.6 MiB)
20:33:55.571 INFO  MemoryStore - Block broadcast_561_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.5 MiB)
20:33:55.571 INFO  BlockManagerInfo - Added broadcast_561_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.571 INFO  SparkContext - Created broadcast 561 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.594 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:55.594 INFO  DAGScheduler - Got job 211 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:55.594 INFO  DAGScheduler - Final stage: ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:55.594 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.594 INFO  DAGScheduler - Missing parents: List()
20:33:55.595 INFO  DAGScheduler - Submitting ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:55.595 INFO  MemoryStore - Block broadcast_562 stored as values in memory (estimated size 3.0 KiB, free 1917.5 MiB)
20:33:55.595 INFO  MemoryStore - Block broadcast_562_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.5 MiB)
20:33:55.595 INFO  BlockManagerInfo - Added broadcast_562_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.5 MiB)
20:33:55.596 INFO  SparkContext - Created broadcast 562 from broadcast at DAGScheduler.scala:1580
20:33:55.596 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:55.596 INFO  TaskSchedulerImpl - Adding task set 275.0 with 4 tasks resource profile 0
20:33:55.596 INFO  TaskSetManager - Starting task 0.0 in stage 275.0 (TID 346) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
20:33:55.597 INFO  TaskSetManager - Starting task 1.0 in stage 275.0 (TID 347) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
20:33:55.597 INFO  TaskSetManager - Starting task 2.0 in stage 275.0 (TID 348) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
20:33:55.597 INFO  TaskSetManager - Starting task 3.0 in stage 275.0 (TID 349) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
20:33:55.597 INFO  Executor - Running task 1.0 in stage 275.0 (TID 347)
20:33:55.597 INFO  Executor - Running task 0.0 in stage 275.0 (TID 346)
20:33:55.597 INFO  Executor - Running task 3.0 in stage 275.0 (TID 349)
20:33:55.597 INFO  Executor - Running task 2.0 in stage 275.0 (TID 348)
20:33:55.603 INFO  Executor - Finished task 3.0 in stage 275.0 (TID 349). 40848 bytes result sent to driver
20:33:55.603 INFO  Executor - Finished task 0.0 in stage 275.0 (TID 346). 40398 bytes result sent to driver
20:33:55.607 INFO  Executor - Finished task 2.0 in stage 275.0 (TID 348). 40423 bytes result sent to driver
20:33:55.607 INFO  BlockManagerInfo - Removed broadcast_557_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:55.607 INFO  Executor - Finished task 1.0 in stage 275.0 (TID 347). 40512 bytes result sent to driver
20:33:55.608 INFO  TaskSetManager - Finished task 0.0 in stage 275.0 (TID 346) in 12 ms on localhost (executor driver) (1/4)
20:33:55.608 INFO  TaskSetManager - Finished task 3.0 in stage 275.0 (TID 349) in 11 ms on localhost (executor driver) (2/4)
20:33:55.608 INFO  TaskSetManager - Finished task 1.0 in stage 275.0 (TID 347) in 12 ms on localhost (executor driver) (3/4)
20:33:55.609 INFO  TaskSetManager - Finished task 2.0 in stage 275.0 (TID 348) in 12 ms on localhost (executor driver) (4/4)
20:33:55.609 INFO  BlockManagerInfo - Removed broadcast_554_piece0 on localhost:45281 in memory (size: 206.0 B, free: 1919.6 MiB)
20:33:55.609 INFO  TaskSchedulerImpl - Removed TaskSet 275.0, whose tasks have all completed, from pool 
20:33:55.609 INFO  DAGScheduler - ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.014 s
20:33:55.609 INFO  BlockManagerInfo - Removed broadcast_552_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:33:55.609 INFO  DAGScheduler - Job 211 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.609 INFO  TaskSchedulerImpl - Killing all running tasks in stage 275: Stage finished
20:33:55.609 INFO  DAGScheduler - Job 211 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.015036 s
20:33:55.610 INFO  BlockManagerInfo - Removed broadcast_560_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:55.610 INFO  BlockManagerInfo - Removed broadcast_558_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:33:55.610 INFO  FileInputFormat - Total input files to process : 1
20:33:55.610 INFO  BlockManagerInfo - Removed broadcast_551_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:33:55.611 INFO  BlockManagerInfo - Removed broadcast_553_piece0 on localhost:45281 in memory (size: 103.6 KiB, free: 1919.8 MiB)
20:33:55.611 INFO  BlockManagerInfo - Removed broadcast_555_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:33:55.611 INFO  BlockManagerInfo - Removed broadcast_556_piece0 on localhost:45281 in memory (size: 206.0 B, free: 1919.8 MiB)
20:33:55.612 INFO  BlockManagerInfo - Removed broadcast_550_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.8 MiB)
20:33:55.612 INFO  BlockManagerInfo - Removed broadcast_559_piece0 on localhost:45281 in memory (size: 103.6 KiB, free: 1919.9 MiB)
20:33:55.646 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.647 INFO  DAGScheduler - Got job 212 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.647 INFO  DAGScheduler - Final stage: ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.647 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.647 INFO  DAGScheduler - Missing parents: List()
20:33:55.647 INFO  DAGScheduler - Submitting ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.671 INFO  MemoryStore - Block broadcast_563 stored as values in memory (estimated size 426.2 KiB, free 1919.2 MiB)
20:33:55.673 INFO  MemoryStore - Block broadcast_563_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1919.1 MiB)
20:33:55.673 INFO  BlockManagerInfo - Added broadcast_563_piece0 in memory on localhost:45281 (size: 153.7 KiB, free: 1919.8 MiB)
20:33:55.673 INFO  SparkContext - Created broadcast 563 from broadcast at DAGScheduler.scala:1580
20:33:55.673 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.674 INFO  TaskSchedulerImpl - Adding task set 276.0 with 1 tasks resource profile 0
20:33:55.674 INFO  TaskSetManager - Starting task 0.0 in stage 276.0 (TID 350) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
20:33:55.674 INFO  Executor - Running task 0.0 in stage 276.0 (TID 350)
20:33:55.706 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:33:55.712 INFO  Executor - Finished task 0.0 in stage 276.0 (TID 350). 159572 bytes result sent to driver
20:33:55.713 INFO  TaskSetManager - Finished task 0.0 in stage 276.0 (TID 350) in 39 ms on localhost (executor driver) (1/1)
20:33:55.713 INFO  TaskSchedulerImpl - Removed TaskSet 276.0, whose tasks have all completed, from pool 
20:33:55.713 INFO  DAGScheduler - ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.066 s
20:33:55.713 INFO  DAGScheduler - Job 212 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.713 INFO  TaskSchedulerImpl - Killing all running tasks in stage 276: Stage finished
20:33:55.713 INFO  DAGScheduler - Job 212 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.066775 s
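[Editor's note] The "Ignoring SAM validation error" lines in the task above come from htsjdk's lenient validation mode: records with inconsistent mate information are logged but kept, which is what the readsSparkSourceTestLenient case exercises on NA12878.chr17_69k_70k.dictFix.bam. As a minimal, illustrative sketch (not the test's own code; the class name is hypothetical, the path is taken from the input split logged above), the same behaviour can be reproduced with plain htsjdk:

    import htsjdk.samtools.SAMRecord;
    import htsjdk.samtools.SamReader;
    import htsjdk.samtools.SamReaderFactory;
    import htsjdk.samtools.ValidationStringency;
    import java.io.File;
    import java.io.IOException;

    public class LenientReadExample {
        public static void main(String[] args) throws IOException {
            // LENIENT reports each validation problem and keeps the record;
            // STRICT would throw, SILENT would stay quiet.
            try (SamReader reader = SamReaderFactory.makeDefault()
                    .validationStringency(ValidationStringency.LENIENT)
                    .open(new File("src/test/resources/NA12878.chr17_69k_70k.dictFix.bam"))) {
                long n = 0;
                for (SAMRecord rec : reader) {
                    n++;
                }
                System.out.println("read " + n + " records despite validation warnings");
            }
        }
    }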
20:33:55.715 INFO  MemoryStore - Block broadcast_564 stored as values in memory (estimated size 536.0 B, free 1919.1 MiB)
20:33:55.716 INFO  MemoryStore - Block broadcast_564_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.1 MiB)
20:33:55.716 INFO  BlockManagerInfo - Added broadcast_564_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.8 MiB)
20:33:55.716 INFO  SparkContext - Created broadcast 564 from broadcast at CramSource.java:114
20:33:55.717 INFO  MemoryStore - Block broadcast_565 stored as values in memory (estimated size 297.9 KiB, free 1918.8 MiB)
20:33:55.723 INFO  MemoryStore - Block broadcast_565_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.7 MiB)
20:33:55.723 INFO  BlockManagerInfo - Added broadcast_565_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.8 MiB)
20:33:55.724 INFO  SparkContext - Created broadcast 565 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:33:55.746 INFO  MemoryStore - Block broadcast_566 stored as values in memory (estimated size 536.0 B, free 1918.7 MiB)
20:33:55.747 INFO  MemoryStore - Block broadcast_566_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.7 MiB)
20:33:55.747 INFO  BlockManagerInfo - Added broadcast_566_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.7 MiB)
20:33:55.747 INFO  SparkContext - Created broadcast 566 from broadcast at CramSource.java:114
20:33:55.748 INFO  MemoryStore - Block broadcast_567 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
20:33:55.754 INFO  MemoryStore - Block broadcast_567_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
20:33:55.754 INFO  BlockManagerInfo - Added broadcast_567_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.7 MiB)
20:33:55.754 INFO  SparkContext - Created broadcast 567 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.772 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:55.772 INFO  DAGScheduler - Got job 213 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:55.772 INFO  DAGScheduler - Final stage: ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:55.772 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.772 INFO  DAGScheduler - Missing parents: List()
20:33:55.772 INFO  DAGScheduler - Submitting ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:55.772 INFO  MemoryStore - Block broadcast_568 stored as values in memory (estimated size 3.0 KiB, free 1918.4 MiB)
20:33:55.773 INFO  MemoryStore - Block broadcast_568_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.4 MiB)
20:33:55.773 INFO  BlockManagerInfo - Added broadcast_568_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.7 MiB)
20:33:55.773 INFO  SparkContext - Created broadcast 568 from broadcast at DAGScheduler.scala:1580
20:33:55.773 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:55.773 INFO  TaskSchedulerImpl - Adding task set 277.0 with 4 tasks resource profile 0
20:33:55.774 INFO  TaskSetManager - Starting task 0.0 in stage 277.0 (TID 351) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
20:33:55.774 INFO  TaskSetManager - Starting task 1.0 in stage 277.0 (TID 352) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
20:33:55.775 INFO  TaskSetManager - Starting task 2.0 in stage 277.0 (TID 353) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
20:33:55.775 INFO  TaskSetManager - Starting task 3.0 in stage 277.0 (TID 354) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
20:33:55.775 INFO  Executor - Running task 0.0 in stage 277.0 (TID 351)
20:33:55.775 INFO  Executor - Running task 2.0 in stage 277.0 (TID 353)
20:33:55.775 INFO  Executor - Running task 3.0 in stage 277.0 (TID 354)
20:33:55.776 INFO  Executor - Running task 1.0 in stage 277.0 (TID 352)
20:33:55.777 INFO  Executor - Finished task 0.0 in stage 277.0 (TID 351). 38907 bytes result sent to driver
20:33:55.777 INFO  Executor - Finished task 1.0 in stage 277.0 (TID 352). 39087 bytes result sent to driver
20:33:55.777 INFO  TaskSetManager - Finished task 0.0 in stage 277.0 (TID 351) in 3 ms on localhost (executor driver) (1/4)
20:33:55.777 INFO  Executor - Finished task 2.0 in stage 277.0 (TID 353). 39070 bytes result sent to driver
20:33:55.777 INFO  Executor - Finished task 3.0 in stage 277.0 (TID 354). 39343 bytes result sent to driver
20:33:55.778 INFO  TaskSetManager - Finished task 2.0 in stage 277.0 (TID 353) in 4 ms on localhost (executor driver) (2/4)
20:33:55.778 INFO  TaskSetManager - Finished task 3.0 in stage 277.0 (TID 354) in 3 ms on localhost (executor driver) (3/4)
20:33:55.778 INFO  TaskSetManager - Finished task 1.0 in stage 277.0 (TID 352) in 4 ms on localhost (executor driver) (4/4)
20:33:55.778 INFO  TaskSchedulerImpl - Removed TaskSet 277.0, whose tasks have all completed, from pool 
20:33:55.778 INFO  DAGScheduler - ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
20:33:55.778 INFO  DAGScheduler - Job 213 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.778 INFO  TaskSchedulerImpl - Killing all running tasks in stage 277: Stage finished
20:33:55.779 INFO  DAGScheduler - Job 213 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006899 s
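[Editor's note] Job 213 is the comparison side of the test: the expected reads are turned into an RDD via parallelize (the ParallelCollectionRDD created at ReadsSparkSourceUnitTest.java:310 in this run) and collected back on the driver. A minimal sketch of that parallelize/collect round trip on a local master, using placeholder strings instead of GATK reads (class name and data are hypothetical):

    import java.util.Arrays;
    import java.util.List;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ParallelizeCollectExample {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("parallelize-collect");
            try (JavaSparkContext ctx = new JavaSparkContext(conf)) {
                // Four partitions, matching the "4 output partitions" reported by the DAGScheduler.
                JavaRDD<String> rdd = ctx.parallelize(Arrays.asList("r1", "r2", "r3", "r4"), 4);
                // collect() runs one task per partition and ships the results to the driver.
                List<String> collected = rdd.collect();
                System.out.println(collected);
            }
        }
    }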
20:33:55.779 INFO  FileInputFormat - Total input files to process : 1
20:33:55.805 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.805 INFO  DAGScheduler - Got job 214 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.805 INFO  DAGScheduler - Final stage: ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.805 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.805 INFO  DAGScheduler - Missing parents: List()
20:33:55.806 INFO  DAGScheduler - Submitting ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.825 INFO  MemoryStore - Block broadcast_569 stored as values in memory (estimated size 286.8 KiB, free 1918.1 MiB)
20:33:55.827 INFO  MemoryStore - Block broadcast_569_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.0 MiB)
20:33:55.827 INFO  BlockManagerInfo - Added broadcast_569_piece0 in memory on localhost:45281 (size: 103.6 KiB, free: 1919.6 MiB)
20:33:55.827 INFO  SparkContext - Created broadcast 569 from broadcast at DAGScheduler.scala:1580
20:33:55.827 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.827 INFO  TaskSchedulerImpl - Adding task set 278.0 with 1 tasks resource profile 0
20:33:55.828 INFO  TaskSetManager - Starting task 0.0 in stage 278.0 (TID 355) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
20:33:55.828 INFO  Executor - Running task 0.0 in stage 278.0 (TID 355)
20:33:55.854 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:33:55.862 INFO  Executor - Finished task 0.0 in stage 278.0 (TID 355). 154101 bytes result sent to driver
20:33:55.863 INFO  TaskSetManager - Finished task 0.0 in stage 278.0 (TID 355) in 36 ms on localhost (executor driver) (1/1)
20:33:55.863 INFO  TaskSchedulerImpl - Removed TaskSet 278.0, whose tasks have all completed, from pool 
20:33:55.863 INFO  DAGScheduler - ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.057 s
20:33:55.863 INFO  DAGScheduler - Job 214 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.863 INFO  TaskSchedulerImpl - Killing all running tasks in stage 278: Stage finished
20:33:55.863 INFO  DAGScheduler - Job 214 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.058451 s
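[Editor's note] Job 214 reads the CRAM input (NA12878.chr17_69k_70k.dictFix.cram); the "broadcast at CramSource.java:114" lines above reflect that CRAM decoding also needs the reference the file was compressed against (human_g1k_v37.chr17_1Mb.fasta in this parameter set). An illustrative htsjdk-only sketch of opening a CRAM with its reference (class name hypothetical, paths taken from the log):

    import htsjdk.samtools.SamReader;
    import htsjdk.samtools.SamReaderFactory;
    import htsjdk.samtools.ValidationStringency;
    import java.io.File;
    import java.io.IOException;

    public class CramWithReferenceExample {
        public static void main(String[] args) throws IOException {
            // Without referenceSequence(...) most CRAM files cannot be decoded.
            try (SamReader reader = SamReaderFactory.makeDefault()
                    .referenceSequence(new File("src/test/resources/human_g1k_v37.chr17_1Mb.fasta"))
                    .validationStringency(ValidationStringency.LENIENT)
                    .open(new File("src/test/resources/NA12878.chr17_69k_70k.dictFix.cram"))) {
                System.out.println(reader.getFileHeader().getSequenceDictionary().size()
                        + " contigs in the CRAM header");
            }
        }
    }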
20:33:55.867 INFO  MemoryStore - Block broadcast_570 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
20:33:55.876 INFO  MemoryStore - Block broadcast_570_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
20:33:55.876 INFO  BlockManagerInfo - Added broadcast_570_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.877 INFO  SparkContext - Created broadcast 570 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.899 INFO  MemoryStore - Block broadcast_571 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
20:33:55.905 INFO  MemoryStore - Block broadcast_571_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
20:33:55.905 INFO  BlockManagerInfo - Added broadcast_571_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.905 INFO  SparkContext - Created broadcast 571 from newAPIHadoopFile at PathSplitSource.java:96
20:33:55.928 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:55.929 INFO  DAGScheduler - Got job 215 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:55.929 INFO  DAGScheduler - Final stage: ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:55.929 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.929 INFO  DAGScheduler - Missing parents: List()
20:33:55.929 INFO  DAGScheduler - Submitting ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:55.929 INFO  MemoryStore - Block broadcast_572 stored as values in memory (estimated size 3.0 KiB, free 1917.3 MiB)
20:33:55.930 INFO  MemoryStore - Block broadcast_572_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.3 MiB)
20:33:55.930 INFO  BlockManagerInfo - Added broadcast_572_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.5 MiB)
20:33:55.930 INFO  SparkContext - Created broadcast 572 from broadcast at DAGScheduler.scala:1580
20:33:55.930 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:55.930 INFO  TaskSchedulerImpl - Adding task set 279.0 with 4 tasks resource profile 0
20:33:55.931 INFO  TaskSetManager - Starting task 0.0 in stage 279.0 (TID 356) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
20:33:55.931 INFO  TaskSetManager - Starting task 1.0 in stage 279.0 (TID 357) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
20:33:55.931 INFO  TaskSetManager - Starting task 2.0 in stage 279.0 (TID 358) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
20:33:55.931 INFO  TaskSetManager - Starting task 3.0 in stage 279.0 (TID 359) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
20:33:55.931 INFO  Executor - Running task 0.0 in stage 279.0 (TID 356)
20:33:55.931 INFO  Executor - Running task 2.0 in stage 279.0 (TID 358)
20:33:55.931 INFO  Executor - Running task 3.0 in stage 279.0 (TID 359)
20:33:55.932 INFO  Executor - Running task 1.0 in stage 279.0 (TID 357)
20:33:55.937 INFO  Executor - Finished task 2.0 in stage 279.0 (TID 358). 40423 bytes result sent to driver
20:33:55.938 INFO  Executor - Finished task 3.0 in stage 279.0 (TID 359). 40848 bytes result sent to driver
20:33:55.938 INFO  Executor - Finished task 1.0 in stage 279.0 (TID 357). 40512 bytes result sent to driver
20:33:55.938 INFO  Executor - Finished task 0.0 in stage 279.0 (TID 356). 40398 bytes result sent to driver
20:33:55.939 INFO  TaskSetManager - Finished task 2.0 in stage 279.0 (TID 358) in 8 ms on localhost (executor driver) (1/4)
20:33:55.939 INFO  TaskSetManager - Finished task 0.0 in stage 279.0 (TID 356) in 9 ms on localhost (executor driver) (2/4)
20:33:55.939 INFO  TaskSetManager - Finished task 1.0 in stage 279.0 (TID 357) in 8 ms on localhost (executor driver) (3/4)
20:33:55.939 INFO  BlockManagerInfo - Removed broadcast_562_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.5 MiB)
20:33:55.939 INFO  TaskSetManager - Finished task 3.0 in stage 279.0 (TID 359) in 8 ms on localhost (executor driver) (4/4)
20:33:55.939 INFO  BlockManagerInfo - Removed broadcast_561_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.5 MiB)
20:33:55.939 INFO  DAGScheduler - ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.010 s
20:33:55.940 INFO  DAGScheduler - Job 215 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:55.940 INFO  TaskSchedulerImpl - Removed TaskSet 279.0, whose tasks have all completed, from pool 
20:33:55.940 INFO  TaskSchedulerImpl - Killing all running tasks in stage 279: Stage finished
20:33:55.940 INFO  DAGScheduler - Job 215 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.011587 s
20:33:55.940 INFO  BlockManagerInfo - Removed broadcast_563_piece0 on localhost:45281 in memory (size: 153.7 KiB, free: 1919.7 MiB)
20:33:55.940 INFO  BlockManagerInfo - Removed broadcast_569_piece0 on localhost:45281 in memory (size: 103.6 KiB, free: 1919.8 MiB)
20:33:55.941 INFO  FileInputFormat - Total input files to process : 1
20:33:55.941 INFO  BlockManagerInfo - Removed broadcast_568_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.8 MiB)
20:33:55.941 INFO  BlockManagerInfo - Removed broadcast_570_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.9 MiB)
20:33:55.942 INFO  BlockManagerInfo - Removed broadcast_564_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.9 MiB)
20:33:55.942 INFO  BlockManagerInfo - Removed broadcast_567_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.9 MiB)
20:33:55.942 INFO  BlockManagerInfo - Removed broadcast_565_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.9 MiB)
20:33:55.943 INFO  BlockManagerInfo - Removed broadcast_566_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.9 MiB)
20:33:55.978 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:55.978 INFO  DAGScheduler - Got job 216 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:55.978 INFO  DAGScheduler - Final stage: ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:55.978 INFO  DAGScheduler - Parents of final stage: List()
20:33:55.978 INFO  DAGScheduler - Missing parents: List()
20:33:55.978 INFO  DAGScheduler - Submitting ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:55.995 INFO  MemoryStore - Block broadcast_573 stored as values in memory (estimated size 426.2 KiB, free 1919.2 MiB)
20:33:55.996 INFO  MemoryStore - Block broadcast_573_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1919.1 MiB)
20:33:55.996 INFO  BlockManagerInfo - Added broadcast_573_piece0 in memory on localhost:45281 (size: 153.7 KiB, free: 1919.8 MiB)
20:33:55.996 INFO  SparkContext - Created broadcast 573 from broadcast at DAGScheduler.scala:1580
20:33:55.997 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:55.997 INFO  TaskSchedulerImpl - Adding task set 280.0 with 1 tasks resource profile 0
20:33:55.997 INFO  TaskSetManager - Starting task 0.0 in stage 280.0 (TID 360) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
20:33:55.997 INFO  Executor - Running task 0.0 in stage 280.0 (TID 360)
20:33:56.028 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
20:33:56.031 INFO  Executor - Finished task 0.0 in stage 280.0 (TID 360). 159572 bytes result sent to driver
20:33:56.032 INFO  TaskSetManager - Finished task 0.0 in stage 280.0 (TID 360) in 35 ms on localhost (executor driver) (1/1)
20:33:56.032 INFO  TaskSchedulerImpl - Removed TaskSet 280.0, whose tasks have all completed, from pool 
20:33:56.032 INFO  DAGScheduler - ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.053 s
20:33:56.032 INFO  DAGScheduler - Job 216 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:56.032 INFO  TaskSchedulerImpl - Killing all running tasks in stage 280: Stage finished
20:33:56.032 INFO  DAGScheduler - Job 216 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.054084 s
20:33:56.035 INFO  MemoryStore - Block broadcast_574 stored as values in memory (estimated size 536.0 B, free 1919.1 MiB)
20:33:56.035 INFO  MemoryStore - Block broadcast_574_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.1 MiB)
20:33:56.035 INFO  BlockManagerInfo - Added broadcast_574_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.8 MiB)
20:33:56.036 INFO  SparkContext - Created broadcast 574 from broadcast at CramSource.java:114
20:33:56.037 INFO  MemoryStore - Block broadcast_575 stored as values in memory (estimated size 297.9 KiB, free 1918.8 MiB)
20:33:56.047 INFO  MemoryStore - Block broadcast_575_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.7 MiB)
20:33:56.047 INFO  BlockManagerInfo - Added broadcast_575_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.8 MiB)
20:33:56.048 INFO  SparkContext - Created broadcast 575 from newAPIHadoopFile at PathSplitSource.java:96
20:33:56.070 INFO  MemoryStore - Block broadcast_576 stored as values in memory (estimated size 536.0 B, free 1918.7 MiB)
20:33:56.071 INFO  MemoryStore - Block broadcast_576_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.7 MiB)
20:33:56.071 INFO  BlockManagerInfo - Added broadcast_576_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.7 MiB)
20:33:56.071 INFO  SparkContext - Created broadcast 576 from broadcast at CramSource.java:114
20:33:56.072 INFO  MemoryStore - Block broadcast_577 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
20:33:56.078 INFO  MemoryStore - Block broadcast_577_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
20:33:56.079 INFO  BlockManagerInfo - Added broadcast_577_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.7 MiB)
20:33:56.079 INFO  SparkContext - Created broadcast 577 from newAPIHadoopFile at PathSplitSource.java:96
20:33:56.096 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:33:56.096 INFO  DAGScheduler - Got job 217 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:33:56.096 INFO  DAGScheduler - Final stage: ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111)
20:33:56.096 INFO  DAGScheduler - Parents of final stage: List()
20:33:56.097 INFO  DAGScheduler - Missing parents: List()
20:33:56.097 INFO  DAGScheduler - Submitting ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:56.097 INFO  MemoryStore - Block broadcast_578 stored as values in memory (estimated size 3.0 KiB, free 1918.4 MiB)
20:33:56.097 INFO  MemoryStore - Block broadcast_578_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.4 MiB)
20:33:56.098 INFO  BlockManagerInfo - Added broadcast_578_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.7 MiB)
20:33:56.098 INFO  SparkContext - Created broadcast 578 from broadcast at DAGScheduler.scala:1580
20:33:56.098 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:56.098 INFO  TaskSchedulerImpl - Adding task set 281.0 with 4 tasks resource profile 0
20:33:56.099 INFO  TaskSetManager - Starting task 0.0 in stage 281.0 (TID 361) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
20:33:56.099 INFO  TaskSetManager - Starting task 1.0 in stage 281.0 (TID 362) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
20:33:56.099 INFO  TaskSetManager - Starting task 2.0 in stage 281.0 (TID 363) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
20:33:56.100 INFO  TaskSetManager - Starting task 3.0 in stage 281.0 (TID 364) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
20:33:56.100 INFO  Executor - Running task 3.0 in stage 281.0 (TID 364)
20:33:56.100 INFO  Executor - Running task 1.0 in stage 281.0 (TID 362)
20:33:56.100 INFO  Executor - Running task 0.0 in stage 281.0 (TID 361)
20:33:56.100 INFO  Executor - Running task 2.0 in stage 281.0 (TID 363)
20:33:56.102 INFO  Executor - Finished task 3.0 in stage 281.0 (TID 364). 39300 bytes result sent to driver
20:33:56.102 INFO  Executor - Finished task 0.0 in stage 281.0 (TID 361). 38950 bytes result sent to driver
20:33:56.102 INFO  Executor - Finished task 2.0 in stage 281.0 (TID 363). 39070 bytes result sent to driver
20:33:56.102 INFO  Executor - Finished task 1.0 in stage 281.0 (TID 362). 39087 bytes result sent to driver
20:33:56.102 INFO  TaskSetManager - Finished task 3.0 in stage 281.0 (TID 364) in 3 ms on localhost (executor driver) (1/4)
20:33:56.103 INFO  TaskSetManager - Finished task 2.0 in stage 281.0 (TID 363) in 4 ms on localhost (executor driver) (2/4)
20:33:56.103 INFO  TaskSetManager - Finished task 0.0 in stage 281.0 (TID 361) in 5 ms on localhost (executor driver) (3/4)
20:33:56.103 INFO  TaskSetManager - Finished task 1.0 in stage 281.0 (TID 362) in 4 ms on localhost (executor driver) (4/4)
20:33:56.103 INFO  TaskSchedulerImpl - Removed TaskSet 281.0, whose tasks have all completed, from pool 
20:33:56.103 INFO  DAGScheduler - ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
20:33:56.103 INFO  DAGScheduler - Job 217 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:56.103 INFO  TaskSchedulerImpl - Killing all running tasks in stage 281: Stage finished
20:33:56.103 INFO  DAGScheduler - Job 217 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006787 s
20:33:56.104 INFO  FileInputFormat - Total input files to process : 1
20:33:56.133 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:33:56.133 INFO  DAGScheduler - Got job 218 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:33:56.133 INFO  DAGScheduler - Final stage: ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112)
20:33:56.133 INFO  DAGScheduler - Parents of final stage: List()
20:33:56.133 INFO  DAGScheduler - Missing parents: List()
20:33:56.134 INFO  DAGScheduler - Submitting ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:56.145 INFO  MemoryStore - Block broadcast_579 stored as values in memory (estimated size 286.8 KiB, free 1918.1 MiB)
20:33:56.146 INFO  MemoryStore - Block broadcast_579_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.0 MiB)
20:33:56.146 INFO  BlockManagerInfo - Added broadcast_579_piece0 in memory on localhost:45281 (size: 103.6 KiB, free: 1919.6 MiB)
20:33:56.146 INFO  SparkContext - Created broadcast 579 from broadcast at DAGScheduler.scala:1580
20:33:56.147 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:56.147 INFO  TaskSchedulerImpl - Adding task set 282.0 with 1 tasks resource profile 0
20:33:56.147 INFO  TaskSetManager - Starting task 0.0 in stage 282.0 (TID 365) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
20:33:56.147 INFO  Executor - Running task 0.0 in stage 282.0 (TID 365)
20:33:56.169 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
20:33:56.179 INFO  Executor - Finished task 0.0 in stage 282.0 (TID 365). 154101 bytes result sent to driver
20:33:56.180 INFO  TaskSetManager - Finished task 0.0 in stage 282.0 (TID 365) in 33 ms on localhost (executor driver) (1/1)
20:33:56.180 INFO  TaskSchedulerImpl - Removed TaskSet 282.0, whose tasks have all completed, from pool 
20:33:56.180 INFO  DAGScheduler - ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.046 s
20:33:56.180 INFO  DAGScheduler - Job 218 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:56.180 INFO  TaskSchedulerImpl - Killing all running tasks in stage 282: Stage finished
20:33:56.180 INFO  DAGScheduler - Job 218 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.046829 s
20:33:56.183 INFO  MemoryStore - Block broadcast_580 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
20:33:56.194 INFO  MemoryStore - Block broadcast_580_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
20:33:56.194 INFO  BlockManagerInfo - Added broadcast_580_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:56.194 INFO  SparkContext - Created broadcast 580 from newAPIHadoopFile at PathSplitSource.java:96
20:33:56.223 INFO  MemoryStore - Block broadcast_581 stored as values in memory (estimated size 536.0 B, free 1917.7 MiB)
20:33:56.223 INFO  MemoryStore - Block broadcast_581_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.7 MiB)
20:33:56.223 INFO  BlockManagerInfo - Added broadcast_581_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.5 MiB)
20:33:56.224 INFO  SparkContext - Created broadcast 581 from broadcast at CramSource.java:114
20:33:56.224 INFO  MemoryStore - Block broadcast_582 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
20:33:56.231 INFO  MemoryStore - Block broadcast_582_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
20:33:56.231 INFO  BlockManagerInfo - Added broadcast_582_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.5 MiB)
20:33:56.231 INFO  SparkContext - Created broadcast 582 from newAPIHadoopFile at PathSplitSource.java:96
20:33:56.253 WARN  FileSystem - Failed to initialize filesystem hdfs://bogus/path.bam: java.lang.IllegalArgumentException: java.net.UnknownHostException: bogus
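[Editor's note] The WARN line above corresponds to readsSparkSourceUnknownHostTest: the path hdfs://bogus/path.bam points at a namenode host that does not resolve, and client setup typically surfaces this as an IllegalArgumentException wrapping an UnknownHostException, exactly as logged. A hedged sketch of triggering that failure with the Hadoop FileSystem API directly (class name hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class UnknownHostExample {
        public static void main(String[] args) {
            Path badPath = new Path("hdfs://bogus/path.bam");
            try {
                // Resolving the filesystem for an unknown namenode host fails during client setup.
                FileSystem fs = badPath.getFileSystem(new Configuration());
                fs.exists(badPath);
            } catch (IllegalArgumentException | IOException e) {
                System.out.println("expected failure: " + e);
            }
        }
    }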
20:33:56.256 INFO  MemoryStore - Block broadcast_583 stored as values in memory (estimated size 298.0 KiB, free 1917.1 MiB)
20:33:56.263 INFO  BlockManagerInfo - Removed broadcast_574_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.5 MiB)
20:33:56.264 INFO  BlockManagerInfo - Removed broadcast_571_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.5 MiB)
20:33:56.264 INFO  BlockManagerInfo - Removed broadcast_580_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:56.265 INFO  BlockManagerInfo - Removed broadcast_572_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:33:56.265 INFO  BlockManagerInfo - Removed broadcast_578_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:33:56.265 INFO  BlockManagerInfo - Removed broadcast_575_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:33:56.266 INFO  BlockManagerInfo - Removed broadcast_576_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.7 MiB)
20:33:56.266 INFO  BlockManagerInfo - Removed broadcast_579_piece0 on localhost:45281 in memory (size: 103.6 KiB, free: 1919.8 MiB)
20:33:56.267 INFO  BlockManagerInfo - Removed broadcast_577_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:33:56.269 INFO  BlockManagerInfo - Removed broadcast_581_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.8 MiB)
20:33:56.269 INFO  BlockManagerInfo - Removed broadcast_582_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:33:56.270 INFO  BlockManagerInfo - Removed broadcast_573_piece0 on localhost:45281 in memory (size: 153.7 KiB, free: 1920.0 MiB)
20:33:56.271 INFO  MemoryStore - Block broadcast_583_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.7 MiB)
20:33:56.271 INFO  BlockManagerInfo - Added broadcast_583_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1920.0 MiB)
20:33:56.271 INFO  SparkContext - Created broadcast 583 from newAPIHadoopFile at PathSplitSource.java:96
20:33:56.306 INFO  MemoryStore - Block broadcast_584 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
20:33:56.312 INFO  MemoryStore - Block broadcast_584_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
20:33:56.312 INFO  BlockManagerInfo - Added broadcast_584_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.9 MiB)
20:33:56.312 INFO  SparkContext - Created broadcast 584 from newAPIHadoopFile at PathSplitSource.java:96
20:33:56.335 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:154
20:33:56.336 INFO  DAGScheduler - Got job 219 (collect at ReadsSparkSourceUnitTest.java:154) with 4 output partitions
20:33:56.336 INFO  DAGScheduler - Final stage: ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154)
20:33:56.336 INFO  DAGScheduler - Parents of final stage: List()
20:33:56.336 INFO  DAGScheduler - Missing parents: List()
20:33:56.336 INFO  DAGScheduler - Submitting ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:33:56.336 INFO  MemoryStore - Block broadcast_585 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
20:33:56.337 INFO  MemoryStore - Block broadcast_585_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
20:33:56.337 INFO  BlockManagerInfo - Added broadcast_585_piece0 in memory on localhost:45281 (size: 1778.0 B, free: 1919.9 MiB)
20:33:56.337 INFO  SparkContext - Created broadcast 585 from broadcast at DAGScheduler.scala:1580
20:33:56.337 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:33:56.337 INFO  TaskSchedulerImpl - Adding task set 283.0 with 4 tasks resource profile 0
20:33:56.338 INFO  TaskSetManager - Starting task 0.0 in stage 283.0 (TID 366) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
20:33:56.338 INFO  TaskSetManager - Starting task 1.0 in stage 283.0 (TID 367) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
20:33:56.338 INFO  TaskSetManager - Starting task 2.0 in stage 283.0 (TID 368) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
20:33:56.339 INFO  TaskSetManager - Starting task 3.0 in stage 283.0 (TID 369) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
20:33:56.339 INFO  Executor - Running task 1.0 in stage 283.0 (TID 367)
20:33:56.339 INFO  Executor - Running task 0.0 in stage 283.0 (TID 366)
20:33:56.339 INFO  Executor - Running task 3.0 in stage 283.0 (TID 369)
20:33:56.339 INFO  Executor - Running task 2.0 in stage 283.0 (TID 368)
20:33:56.341 INFO  Executor - Finished task 2.0 in stage 283.0 (TID 368). 163356 bytes result sent to driver
20:33:56.341 INFO  Executor - Finished task 1.0 in stage 283.0 (TID 367). 163368 bytes result sent to driver
20:33:56.342 INFO  Executor - Finished task 3.0 in stage 283.0 (TID 369). 162694 bytes result sent to driver
20:33:56.343 INFO  Executor - Finished task 0.0 in stage 283.0 (TID 366). 163161 bytes result sent to driver
20:33:56.343 INFO  TaskSetManager - Finished task 2.0 in stage 283.0 (TID 368) in 5 ms on localhost (executor driver) (1/4)
20:33:56.343 INFO  TaskSetManager - Finished task 1.0 in stage 283.0 (TID 367) in 5 ms on localhost (executor driver) (2/4)
20:33:56.344 INFO  TaskSetManager - Finished task 3.0 in stage 283.0 (TID 369) in 6 ms on localhost (executor driver) (3/4)
20:33:56.344 INFO  TaskSetManager - Finished task 0.0 in stage 283.0 (TID 366) in 6 ms on localhost (executor driver) (4/4)
20:33:56.344 INFO  TaskSchedulerImpl - Removed TaskSet 283.0, whose tasks have all completed, from pool 
20:33:56.344 INFO  DAGScheduler - ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154) finished in 0.008 s
20:33:56.344 INFO  DAGScheduler - Job 219 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:56.344 INFO  TaskSchedulerImpl - Killing all running tasks in stage 283: Stage finished
20:33:56.344 INFO  DAGScheduler - Job 219 finished: collect at ReadsSparkSourceUnitTest.java:154, took 0.008649 s
20:33:56.353 INFO  FileInputFormat - Total input files to process : 2
20:33:56.390 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:155
20:33:56.390 INFO  DAGScheduler - Got job 220 (collect at ReadsSparkSourceUnitTest.java:155) with 2 output partitions
20:33:56.390 INFO  DAGScheduler - Final stage: ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155)
20:33:56.390 INFO  DAGScheduler - Parents of final stage: List()
20:33:56.390 INFO  DAGScheduler - Missing parents: List()
20:33:56.390 INFO  DAGScheduler - Submitting ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:56.407 INFO  MemoryStore - Block broadcast_586 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
20:33:56.408 INFO  MemoryStore - Block broadcast_586_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.7 MiB)
20:33:56.408 INFO  BlockManagerInfo - Added broadcast_586_piece0 in memory on localhost:45281 (size: 153.7 KiB, free: 1919.8 MiB)
20:33:56.409 INFO  SparkContext - Created broadcast 586 from broadcast at DAGScheduler.scala:1580
20:33:56.409 INFO  DAGScheduler - Submitting 2 missing tasks from ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0, 1))
20:33:56.409 INFO  TaskSchedulerImpl - Adding task set 284.0 with 2 tasks resource profile 0
20:33:56.409 INFO  TaskSetManager - Starting task 0.0 in stage 284.0 (TID 370) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7928 bytes) 
20:33:56.409 INFO  TaskSetManager - Starting task 1.0 in stage 284.0 (TID 371) (localhost, executor driver, partition 1, PROCESS_LOCAL, 7928 bytes) 
20:33:56.410 INFO  Executor - Running task 1.0 in stage 284.0 (TID 371)
20:33:56.410 INFO  Executor - Running task 0.0 in stage 284.0 (TID 370)
20:33:56.446 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00000.bam:0+131967
20:33:56.446 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00001.bam:0+131967
20:33:56.455 INFO  Executor - Finished task 0.0 in stage 284.0 (TID 370). 325590 bytes result sent to driver
20:33:56.456 INFO  TaskSetManager - Finished task 0.0 in stage 284.0 (TID 370) in 47 ms on localhost (executor driver) (1/2)
20:33:56.459 INFO  Executor - Finished task 1.0 in stage 284.0 (TID 371). 325590 bytes result sent to driver
20:33:56.459 INFO  TaskSetManager - Finished task 1.0 in stage 284.0 (TID 371) in 50 ms on localhost (executor driver) (2/2)
20:33:56.459 INFO  TaskSchedulerImpl - Removed TaskSet 284.0, whose tasks have all completed, from pool 
20:33:56.460 INFO  DAGScheduler - ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155) finished in 0.068 s
20:33:56.460 INFO  DAGScheduler - Job 220 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:56.460 INFO  TaskSchedulerImpl - Killing all running tasks in stage 284: Stage finished
20:33:56.460 INFO  DAGScheduler - Job 220 finished: collect at ReadsSparkSourceUnitTest.java:155, took 0.069931 s
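[Editor's note] Job 220 is the sharded case: the ".sharded.bam" input is a directory, and the two input splits above show one task per part file (part-r-00000.bam, part-r-00001.bam). A small illustrative sketch that lists those parts with a Hadoop glob, under the assumption that only the standard FileSystem API is used (class name hypothetical, directory path taken from the log):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShardedBamPartsExample {
        public static void main(String[] args) throws IOException {
            // The sharded "BAM" is really a directory containing one part file per writer partition.
            Path dir = new Path("src/test/resources/org/broadinstitute/hellbender/tools/BQSR/"
                    + "HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam");
            FileSystem fs = dir.getFileSystem(new Configuration());
            FileStatus[] parts = fs.globStatus(new Path(dir, "part-r-*.bam"));
            if (parts != null) {
                for (FileStatus part : parts) {
                    System.out.println(part.getPath().getName() + " : " + part.getLen() + " bytes");
                }
            }
        }
    }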
20:33:56.469 INFO  MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
20:33:56.470 INFO  NameNode - Formatting using clusterid: testClusterID
20:33:56.470 INFO  FSEditLog - Edit logging is async:true
20:33:56.478 INFO  FSNamesystem - KeyProvider: null
20:33:56.478 INFO  FSNamesystem - fsLock is fair: true
20:33:56.478 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
20:33:56.478 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
20:33:56.478 INFO  FSNamesystem - supergroup             = supergroup
20:33:56.478 INFO  FSNamesystem - isPermissionEnabled    = true
20:33:56.478 INFO  FSNamesystem - isStoragePolicyEnabled = true
20:33:56.478 INFO  FSNamesystem - HA Enabled: false
20:33:56.478 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:56.478 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:33:56.478 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:33:56.478 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:33:56.479 INFO  BlockManager - The block deletion will start around 2025 Jul 15 20:33:56
20:33:56.479 INFO  GSet - Computing capacity for map BlocksMap
20:33:56.479 INFO  GSet - VM type       = 64-bit
20:33:56.479 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
20:33:56.479 INFO  GSet - capacity      = 2^23 = 8388608 entries
20:33:56.480 INFO  BlockManager - Storage policy satisfier is disabled
20:33:56.480 INFO  BlockManager - dfs.block.access.token.enable = false
20:33:56.480 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:33:56.480 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:33:56.480 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:33:56.481 INFO  BlockManager - defaultReplication         = 1
20:33:56.481 INFO  BlockManager - maxReplication             = 512
20:33:56.481 INFO  BlockManager - minReplication             = 1
20:33:56.481 INFO  BlockManager - maxReplicationStreams      = 2
20:33:56.481 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
20:33:56.481 INFO  BlockManager - encryptDataTransfer        = false
20:33:56.481 INFO  BlockManager - maxNumBlocksToLog          = 1000
20:33:56.481 INFO  GSet - Computing capacity for map INodeMap
20:33:56.481 INFO  GSet - VM type       = 64-bit
20:33:56.481 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
20:33:56.481 INFO  GSet - capacity      = 2^22 = 4194304 entries
20:33:56.486 INFO  FSDirectory - ACLs enabled? true
20:33:56.486 INFO  FSDirectory - POSIX ACL inheritance enabled? true
20:33:56.486 INFO  FSDirectory - XAttrs enabled? true
20:33:56.486 INFO  NameNode - Caching file names occurring more than 10 times
20:33:56.486 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:33:56.486 INFO  SnapshotManager - SkipList is disabled
20:33:56.486 INFO  BlockManagerInfo - Removed broadcast_586_piece0 on localhost:45281 in memory (size: 153.7 KiB, free: 1919.9 MiB)
20:33:56.486 INFO  GSet - Computing capacity for map cachedBlocks
20:33:56.486 INFO  GSet - VM type       = 64-bit
20:33:56.487 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:33:56.487 INFO  GSet - capacity      = 2^20 = 1048576 entries
20:33:56.487 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:33:56.487 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:33:56.487 INFO  BlockManagerInfo - Removed broadcast_585_piece0 on localhost:45281 in memory (size: 1778.0 B, free: 1919.9 MiB)
20:33:56.487 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:33:56.487 INFO  FSNamesystem - Retry cache on namenode is enabled
20:33:56.487 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:33:56.487 INFO  GSet - Computing capacity for map NameNodeRetryCache
20:33:56.487 INFO  GSet - VM type       = 64-bit
20:33:56.487 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:33:56.487 INFO  GSet - capacity      = 2^17 = 131072 entries
20:33:56.488 INFO  BlockManagerInfo - Removed broadcast_584_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1920.0 MiB)
20:33:56.488 INFO  FSImage - Allocated new BlockPoolId: BP-582100319-10.1.0.111-1752611636488
20:33:56.488 INFO  BlockManagerInfo - Removed broadcast_583_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1920.0 MiB)
20:33:56.490 INFO  Storage - Storage directory /tmp/minicluster_storage3005522765522698203/name-0-1 has been successfully formatted.
20:33:56.491 INFO  Storage - Storage directory /tmp/minicluster_storage3005522765522698203/name-0-2 has been successfully formatted.
20:33:56.502 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage3005522765522698203/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
20:33:56.502 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage3005522765522698203/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
20:33:56.505 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage3005522765522698203/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:33:56.505 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage3005522765522698203/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:33:56.506 INFO  NNStorageRetentionManager - Going to retain 1 images with txid >= 0
20:33:56.511 INFO  FSNamesystem - Stopping services started for active state
20:33:56.512 INFO  FSNamesystem - Stopping services started for standby state
20:33:56.512 INFO  NameNode - createNameNode []
20:33:56.513 WARN  MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
20:33:56.514 INFO  MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
20:33:56.514 INFO  MetricsSystemImpl - NameNode metrics system started
20:33:56.515 INFO  NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
20:33:56.529 INFO  JvmPauseMonitor - Starting JVM pause monitor
20:33:56.529 INFO  DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
20:33:56.529 INFO  DFSUtil - Starting Web-server for hdfs at: http://localhost:0
20:33:56.531 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:33:56.531 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
20:33:56.532 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:33:56.533 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
20:33:56.533 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:33:56.534 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
20:33:56.534 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
20:33:56.535 INFO  HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
20:33:56.535 INFO  HttpServer2 - Jetty bound to port 39525
20:33:56.535 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:33:56.536 INFO  session - DefaultSessionIdManager workerName=node0
20:33:56.536 INFO  session - No SessionScavenger set, using defaults
20:33:56.536 INFO  session - node0 Scavenging every 660000ms
20:33:56.537 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:33:56.537 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@150a31a{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:33:56.629 INFO  ContextHandler - Started o.e.j.w.WebAppContext@30bd80bd{hdfs,/,file:///tmp/jetty-localhost-39525-hadoop-hdfs-3_3_6-tests_jar-_-any-2027081784397223895/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:33:56.629 INFO  AbstractConnector - Started ServerConnector@3a6f9780{HTTP/1.1, (http/1.1)}{localhost:39525}
20:33:56.629 INFO  Server - Started @72379ms
20:33:56.630 INFO  FSEditLog - Edit logging is async:true
20:33:56.638 INFO  FSNamesystem - KeyProvider: null
20:33:56.638 INFO  FSNamesystem - fsLock is fair: true
20:33:56.638 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
20:33:56.638 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
20:33:56.638 INFO  FSNamesystem - supergroup             = supergroup
20:33:56.638 INFO  FSNamesystem - isPermissionEnabled    = true
20:33:56.638 INFO  FSNamesystem - isStoragePolicyEnabled = true
20:33:56.638 INFO  FSNamesystem - HA Enabled: false
20:33:56.638 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:56.639 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:33:56.639 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:33:56.639 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:33:56.639 INFO  BlockManager - The block deletion will start around 2025 Jul 15 20:33:56
20:33:56.639 INFO  GSet - Computing capacity for map BlocksMap
20:33:56.639 INFO  GSet - VM type       = 64-bit
20:33:56.639 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
20:33:56.639 INFO  GSet - capacity      = 2^23 = 8388608 entries
20:33:56.644 INFO  BlockManager - Storage policy satisfier is disabled
20:33:56.644 INFO  BlockManager - dfs.block.access.token.enable = false
20:33:56.644 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:33:56.644 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:33:56.644 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:33:56.644 INFO  BlockManager - defaultReplication         = 1
20:33:56.644 INFO  BlockManager - maxReplication             = 512
20:33:56.644 INFO  BlockManager - minReplication             = 1
20:33:56.644 INFO  BlockManager - maxReplicationStreams      = 2
20:33:56.644 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
20:33:56.644 INFO  BlockManager - encryptDataTransfer        = false
20:33:56.644 INFO  BlockManager - maxNumBlocksToLog          = 1000
20:33:56.644 INFO  GSet - Computing capacity for map INodeMap
20:33:56.644 INFO  GSet - VM type       = 64-bit
20:33:56.644 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
20:33:56.644 INFO  GSet - capacity      = 2^22 = 4194304 entries
20:33:56.649 INFO  FSDirectory - ACLs enabled? true
20:33:56.649 INFO  FSDirectory - POSIX ACL inheritance enabled? true
20:33:56.649 INFO  FSDirectory - XAttrs enabled? true
20:33:56.649 INFO  NameNode - Caching file names occurring more than 10 times
20:33:56.649 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:33:56.649 INFO  SnapshotManager - SkipList is disabled
20:33:56.649 INFO  GSet - Computing capacity for map cachedBlocks
20:33:56.649 INFO  GSet - VM type       = 64-bit
20:33:56.649 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:33:56.649 INFO  GSet - capacity      = 2^20 = 1048576 entries
20:33:56.649 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:33:56.649 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:33:56.649 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:33:56.649 INFO  FSNamesystem - Retry cache on namenode is enabled
20:33:56.649 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:33:56.649 INFO  GSet - Computing capacity for map NameNodeRetryCache
20:33:56.649 INFO  GSet - VM type       = 64-bit
20:33:56.650 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:33:56.650 INFO  GSet - capacity      = 2^17 = 131072 entries
20:33:56.651 INFO  Storage - Lock on /tmp/minicluster_storage3005522765522698203/name-0-1/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:56.652 INFO  Storage - Lock on /tmp/minicluster_storage3005522765522698203/name-0-2/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:56.652 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage3005522765522698203/name-0-1/current
20:33:56.652 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage3005522765522698203/name-0-2/current
20:33:56.652 INFO  FSImage - No edit log streams selected.
20:33:56.652 INFO  FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage3005522765522698203/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
20:33:56.653 INFO  FSImageFormatPBINode - Loading 1 INodes.
20:33:56.654 INFO  FSImageFormatPBINode - Successfully loaded 1 inodes
20:33:56.654 INFO  FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
20:33:56.654 INFO  FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
20:33:56.654 INFO  FSImage - Loaded image for txid 0 from /tmp/minicluster_storage3005522765522698203/name-0-1/current/fsimage_0000000000000000000
20:33:56.654 INFO  FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
20:33:56.654 INFO  FSEditLog - Starting log segment at 1
20:33:56.658 INFO  NameCache - initialized with 0 entries 0 lookups
20:33:56.658 INFO  FSNamesystem - Finished loading FSImage in 8 msecs
20:33:56.658 INFO  NameNode - RPC server is binding to localhost:0
20:33:56.658 INFO  NameNode - Enable NameNode state context:false
20:33:56.658 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:33:56.658 INFO  Server - Listener at localhost:41421
20:33:56.659 INFO  Server - Starting Socket Reader #1 for port 0
20:33:56.661 INFO  NameNode - Clients are to use localhost:41421 to access this namenode/service.
20:33:56.661 INFO  FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
20:33:56.675 INFO  LeaseManager - Number of blocks under construction: 0
20:33:56.675 INFO  DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
20:33:56.676 INFO  BlockManager - Start MarkedDeleteBlockScrubber thread
20:33:56.676 INFO  BlockManager - initializing replication queues
20:33:56.677 INFO  StateChange - STATE* Leaving safe mode after 0 secs
20:33:56.677 INFO  StateChange - STATE* Network topology has 0 racks and 0 datanodes
20:33:56.677 INFO  StateChange - STATE* UnderReplicatedBlocks has 0 blocks
20:33:56.680 INFO  Server - IPC Server Responder: starting
20:33:56.680 INFO  Server - IPC Server listener on 0: starting
20:33:56.681 INFO  BlockManager - Total number of blocks            = 0
20:33:56.681 INFO  BlockManager - Number of invalid blocks          = 0
20:33:56.681 INFO  BlockManager - Number of under-replicated blocks = 0
20:33:56.681 INFO  BlockManager - Number of  over-replicated blocks = 0
20:33:56.681 INFO  BlockManager - Number of blocks being written    = 0
20:33:56.681 INFO  StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 4 msec
20:33:56.682 INFO  NameNode - NameNode RPC up at: localhost/127.0.0.1:41421
20:33:56.682 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:33:56.682 INFO  FSNamesystem - Starting services required for active state
20:33:56.682 INFO  FSDirectory - Initializing quota with 12 thread(s)
20:33:56.682 INFO  FSDirectory - Quota initialization completed in 1 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
20:33:56.683 INFO  CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
20:33:56.684 INFO  MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data1,[DISK]file:/tmp/minicluster_storage3005522765522698203/data/data2
20:33:56.685 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data1
20:33:56.685 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data2
20:33:56.698 INFO  MetricsSystemImpl - DataNode metrics system started (again)
20:33:56.698 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:56.698 INFO  BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
20:33:56.698 INFO  DataNode - Configured hostname is 127.0.0.1
20:33:56.698 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:56.698 INFO  DataNode - Starting DataNode with maxLockedMemory = 0
20:33:56.698 INFO  DataNode - Opened streaming server at /127.0.0.1:46325
20:33:56.699 INFO  DataNode - Balancing bandwidth is 104857600 bytes/s
20:33:56.699 INFO  DataNode - Number threads for balancing is 100
20:33:56.701 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:33:56.702 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
20:33:56.703 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:33:56.704 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
20:33:56.704 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:33:56.704 INFO  HttpServer2 - Jetty bound to port 33627
20:33:56.704 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:33:56.705 INFO  session - DefaultSessionIdManager workerName=node0
20:33:56.705 INFO  session - No SessionScavenger set, using defaults
20:33:56.705 INFO  session - node0 Scavenging every 600000ms
20:33:56.706 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@40072e1b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:33:56.847 INFO  ContextHandler - Started o.e.j.w.WebAppContext@11185945{datanode,/,file:///tmp/jetty-localhost-33627-hadoop-hdfs-3_3_6-tests_jar-_-any-12952689214917739148/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:33:56.847 INFO  AbstractConnector - Started ServerConnector@1a84c9af{HTTP/1.1, (http/1.1)}{localhost:33627}
20:33:56.847 INFO  Server - Started @72597ms
20:33:56.849 WARN  DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
20:33:56.850 INFO  DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:42941
20:33:56.850 INFO  DataNode - dnUserName = runner
20:33:56.850 INFO  DataNode - supergroup = supergroup
20:33:56.850 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:33:56.850 INFO  Server - Listener at localhost:40897
20:33:56.851 INFO  JvmPauseMonitor - Starting JVM pause monitor
20:33:56.852 INFO  Server - Starting Socket Reader #1 for port 0
20:33:56.854 INFO  DataNode - Opened IPC server at /127.0.0.1:40897
20:33:56.869 INFO  DataNode - Refresh request received for nameservices: null
20:33:56.869 INFO  DataNode - Starting BPOfferServices for nameservices: <default>
20:33:56.871 INFO  DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:41421 starting to offer service
20:33:56.871 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:33:56.872 INFO  Server - IPC Server Responder: starting
20:33:56.872 INFO  Server - IPC Server listener on 0: starting
20:33:56.875 INFO  DataNode - Acknowledging ACTIVE Namenode during handshake Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:41421
20:33:56.875 INFO  Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
20:33:56.875 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
20:33:56.876 INFO  MiniDFSCluster - dnInfo.length != numDataNodes
20:33:56.876 INFO  MiniDFSCluster - Waiting for cluster to become active
20:33:56.876 INFO  Storage - Lock on /tmp/minicluster_storage3005522765522698203/data/data1/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:56.876 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data1 is not formatted for namespace 1445933681. Formatting...
20:33:56.876 INFO  Storage - Generated new storageID DS-5c39c546-447b-4b8f-b6f2-ab2032193f23 for directory /tmp/minicluster_storage3005522765522698203/data/data1 
20:33:56.878 INFO  Storage - Lock on /tmp/minicluster_storage3005522765522698203/data/data2/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:56.878 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data2 is not formatted for namespace 1445933681. Formatting...
20:33:56.878 INFO  Storage - Generated new storageID DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac for directory /tmp/minicluster_storage3005522765522698203/data/data2 
20:33:56.892 INFO  Storage - Analyzing storage directories for bpid BP-582100319-10.1.0.111-1752611636488
20:33:56.892 INFO  Storage - Locking is disabled for /tmp/minicluster_storage3005522765522698203/data/data1/current/BP-582100319-10.1.0.111-1752611636488
20:33:56.892 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data1 and block pool id BP-582100319-10.1.0.111-1752611636488 is not formatted. Formatting ...
20:33:56.892 INFO  Storage - Formatting block pool BP-582100319-10.1.0.111-1752611636488 directory /tmp/minicluster_storage3005522765522698203/data/data1/current/BP-582100319-10.1.0.111-1752611636488/current
20:33:56.910 INFO  Storage - Analyzing storage directories for bpid BP-582100319-10.1.0.111-1752611636488
20:33:56.910 INFO  Storage - Locking is disabled for /tmp/minicluster_storage3005522765522698203/data/data2/current/BP-582100319-10.1.0.111-1752611636488
20:33:56.910 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data2 and block pool id BP-582100319-10.1.0.111-1752611636488 is not formatted. Formatting ...
20:33:56.910 INFO  Storage - Formatting block pool BP-582100319-10.1.0.111-1752611636488 directory /tmp/minicluster_storage3005522765522698203/data/data2/current/BP-582100319-10.1.0.111-1752611636488/current
20:33:56.911 INFO  DataNode - Setting up storage: nsid=1445933681;bpid=BP-582100319-10.1.0.111-1752611636488;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=1445933681;c=1752611636488;bpid=BP-582100319-10.1.0.111-1752611636488;dnuuid=null
20:33:56.912 INFO  DataNode - Generated and persisted new Datanode UUID 0774b3b8-dff5-4281-9052-361ec969c0fe
20:33:56.912 INFO  FsDatasetImpl - The datanode lock is a read write lock
20:33:56.913 INFO  FsDatasetImpl - Added new volume: DS-5c39c546-447b-4b8f-b6f2-ab2032193f23
20:33:56.913 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data1, StorageType: DISK
20:33:56.914 INFO  FsDatasetImpl - Added new volume: DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac
20:33:56.914 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage3005522765522698203/data/data2, StorageType: DISK
20:33:56.914 INFO  MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
20:33:56.914 INFO  FsDatasetImpl - Registered FSDatasetState MBean
20:33:56.915 INFO  FsDatasetImpl - Adding block pool BP-582100319-10.1.0.111-1752611636488
20:33:56.915 INFO  FsDatasetImpl - Scanning block pool BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data1...
20:33:56.915 INFO  FsDatasetImpl - Scanning block pool BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data2...
20:33:56.915 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage3005522765522698203/data/data1/current/BP-582100319-10.1.0.111-1752611636488/current, will proceed with Du for space computation calculation, 
20:33:56.915 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage3005522765522698203/data/data2/current/BP-582100319-10.1.0.111-1752611636488/current, will proceed with Du for space computation calculation, 
20:33:56.928 INFO  FsDatasetImpl - Time taken to scan block pool BP-582100319-10.1.0.111-1752611636488 on /tmp/minicluster_storage3005522765522698203/data/data1: 13ms
20:33:56.931 INFO  FsDatasetImpl - Time taken to scan block pool BP-582100319-10.1.0.111-1752611636488 on /tmp/minicluster_storage3005522765522698203/data/data2: 16ms
20:33:56.931 INFO  FsDatasetImpl - Total time to scan all replicas for block pool BP-582100319-10.1.0.111-1752611636488: 16ms
20:33:56.931 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data1...
20:33:56.931 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage3005522765522698203/data/data1/current/BP-582100319-10.1.0.111-1752611636488/current/replicas doesn't exist 
20:33:56.931 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data2...
20:33:56.931 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage3005522765522698203/data/data2/current/BP-582100319-10.1.0.111-1752611636488/current/replicas doesn't exist 
20:33:56.931 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data1: 0ms
20:33:56.931 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data2: 1ms
20:33:56.932 INFO  FsDatasetImpl - Total time to add all replicas to map for block pool BP-582100319-10.1.0.111-1752611636488: 1ms
20:33:56.932 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage3005522765522698203/data/data1
20:33:56.932 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage3005522765522698203/data/data1
20:33:56.932 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage3005522765522698203/data/data2
20:33:56.932 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage3005522765522698203/data/data2
20:33:56.932 INFO  VolumeScanner - Now scanning bpid BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data1
20:33:56.932 WARN  DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
20:33:56.932 INFO  VolumeScanner - Now scanning bpid BP-582100319-10.1.0.111-1752611636488 on volume /tmp/minicluster_storage3005522765522698203/data/data2
20:33:56.932 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3005522765522698203/data/data1, DS-5c39c546-447b-4b8f-b6f2-ab2032193f23): finished scanning block pool BP-582100319-10.1.0.111-1752611636488
20:33:56.932 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3005522765522698203/data/data2, DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac): finished scanning block pool BP-582100319-10.1.0.111-1752611636488
20:33:56.932 INFO  DirectoryScanner - Periodic Directory Tree Verification scan starting in 835134ms with interval of 21600000ms and throttle limit of -1ms/s
20:33:56.933 INFO  DataNode - Block pool BP-582100319-10.1.0.111-1752611636488 (Datanode Uuid 0774b3b8-dff5-4281-9052-361ec969c0fe) service to localhost/127.0.0.1:41421 beginning handshake with NN
20:33:56.933 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3005522765522698203/data/data2, DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac): no suitable block pools found to scan.  Waiting 1814399999 ms.
20:33:56.933 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3005522765522698203/data/data1, DS-5c39c546-447b-4b8f-b6f2-ab2032193f23): no suitable block pools found to scan.  Waiting 1814399999 ms.
20:33:56.933 INFO  StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:46325, datanodeUuid=0774b3b8-dff5-4281-9052-361ec969c0fe, infoPort=42941, infoSecurePort=0, ipcPort=40897, storageInfo=lv=-57;cid=testClusterID;nsid=1445933681;c=1752611636488) storage 0774b3b8-dff5-4281-9052-361ec969c0fe
20:33:56.933 INFO  NetworkTopology - Adding a new node: /default-rack/127.0.0.1:46325
20:33:56.933 INFO  BlockReportLeaseManager - Registered DN 0774b3b8-dff5-4281-9052-361ec969c0fe (127.0.0.1:46325).
20:33:56.934 INFO  DataNode - Block pool BP-582100319-10.1.0.111-1752611636488 (Datanode Uuid 0774b3b8-dff5-4281-9052-361ec969c0fe) service to localhost/127.0.0.1:41421 successfully registered with NN
20:33:56.934 INFO  DataNode - For namenode localhost/127.0.0.1:41421 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
20:33:56.934 INFO  DataNode - Starting IBR Task Handler.
20:33:56.934 INFO  DatanodeDescriptor - Adding new storage ID DS-5c39c546-447b-4b8f-b6f2-ab2032193f23 for DN 127.0.0.1:46325
20:33:56.934 INFO  DatanodeDescriptor - Adding new storage ID DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac for DN 127.0.0.1:46325
20:33:56.935 INFO  DataNode - After receiving heartbeat response, updating state of namenode localhost:41421 to active
20:33:56.935 INFO  BlockStateChange - BLOCK* processReport 0x230d9368019073b5 with lease ID 0xe2f37c48317fec8f: Processing first storage report for DS-5c39c546-447b-4b8f-b6f2-ab2032193f23 from datanode DatanodeRegistration(127.0.0.1:46325, datanodeUuid=0774b3b8-dff5-4281-9052-361ec969c0fe, infoPort=42941, infoSecurePort=0, ipcPort=40897, storageInfo=lv=-57;cid=testClusterID;nsid=1445933681;c=1752611636488)
20:33:56.935 INFO  BlockStateChange - BLOCK* processReport 0x230d9368019073b5 with lease ID 0xe2f37c48317fec8f: from storage DS-5c39c546-447b-4b8f-b6f2-ab2032193f23 node DatanodeRegistration(127.0.0.1:46325, datanodeUuid=0774b3b8-dff5-4281-9052-361ec969c0fe, infoPort=42941, infoSecurePort=0, ipcPort=40897, storageInfo=lv=-57;cid=testClusterID;nsid=1445933681;c=1752611636488), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
20:33:56.936 INFO  BlockStateChange - BLOCK* processReport 0x230d9368019073b5 with lease ID 0xe2f37c48317fec8f: Processing first storage report for DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac from datanode DatanodeRegistration(127.0.0.1:46325, datanodeUuid=0774b3b8-dff5-4281-9052-361ec969c0fe, infoPort=42941, infoSecurePort=0, ipcPort=40897, storageInfo=lv=-57;cid=testClusterID;nsid=1445933681;c=1752611636488)
20:33:56.936 INFO  BlockStateChange - BLOCK* processReport 0x230d9368019073b5 with lease ID 0xe2f37c48317fec8f: from storage DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac node DatanodeRegistration(127.0.0.1:46325, datanodeUuid=0774b3b8-dff5-4281-9052-361ec969c0fe, infoPort=42941, infoSecurePort=0, ipcPort=40897, storageInfo=lv=-57;cid=testClusterID;nsid=1445933681;c=1752611636488), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
20:33:56.936 INFO  DataNode - Successfully sent block report 0x230d9368019073b5 with lease ID 0xe2f37c48317fec8f to namenode: localhost/127.0.0.1:41421,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
20:33:56.936 INFO  DataNode - Got finalize command for block pool BP-582100319-10.1.0.111-1752611636488
20:33:56.976 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
20:33:56.977 INFO  MiniDFSCluster - Cluster is active
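The block above is testCRAMReferenceFromHDFS spinning up a single-DataNode MiniDFSCluster under a /tmp/minicluster_storage... scratch directory and waiting for the NameNode/DataNode handshake to complete. A minimal sketch of that startup/teardown pattern, assuming only the public hadoop-hdfs test API (the hadoop-hdfs "tests" artifact already on the classpath above); the class name and scratch directory are illustrative, not taken from the GATK test:

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public final class MiniDfsClusterSketch {
    public static void main(final String[] args) throws Exception {
        // Scratch directory, mirroring the /tmp/minicluster_storage... paths in the log above.
        final File storageDir = new File(System.getProperty("java.io.tmpdir"), "minicluster_storage_sketch");
        final Configuration conf = new Configuration();
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, storageDir.getAbsolutePath());

        // One NameNode plus one DataNode, as in "Starting DataNode 0 ..." above.
        final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        try {
            cluster.waitClusterUp();                        // corresponds to "Cluster is active"
            final FileSystem fs = cluster.getFileSystem();  // hdfs://localhost:<ephemeral NameNode port>
            System.out.println("NameNode URI: " + fs.getUri());
        } finally {
            cluster.shutdown();                             // "Shutting down the Mini HDFS Cluster"
        }
    }
}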
20:33:56.979 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:56.981 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.cram	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
20:33:56.983 INFO  StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:46325 for /user/runner/hdfs.cram
20:33:56.984 INFO  DataNode - Receiving BP-582100319-10.1.0.111-1752611636488:blk_1073741825_1001 src: /127.0.0.1:33504 dest: /127.0.0.1:46325
20:33:56.985 INFO  clienttrace - src: /127.0.0.1:33504, dest: /127.0.0.1:46325, bytes: 50619, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1851012337_1, offset: 0, srvID: 0774b3b8-dff5-4281-9052-361ec969c0fe, blockid: BP-582100319-10.1.0.111-1752611636488:blk_1073741825_1001, duration(ns): 503040
20:33:56.985 INFO  DataNode - PacketResponder: BP-582100319-10.1.0.111-1752611636488:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
20:33:56.986 INFO  FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.cram
20:33:57.387 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.cram is closed by DFSClient_NONMAPREDUCE_-1851012337_1
20:33:57.388 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.388 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.fasta	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
20:33:57.389 INFO  StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:46325 for /user/runner/hdfs.fasta
20:33:57.390 INFO  DataNode - Receiving BP-582100319-10.1.0.111-1752611636488:blk_1073741826_1002 src: /127.0.0.1:48052 dest: /127.0.0.1:46325
20:33:57.393 INFO  clienttrace - src: /127.0.0.1:48052, dest: /127.0.0.1:46325, bytes: 1016671, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1851012337_1, offset: 0, srvID: 0774b3b8-dff5-4281-9052-361ec969c0fe, blockid: BP-582100319-10.1.0.111-1752611636488:blk_1073741826_1002, duration(ns): 2359654
20:33:57.393 INFO  DataNode - PacketResponder: BP-582100319-10.1.0.111-1752611636488:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
20:33:57.394 INFO  FSNamesystem - BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.fasta
20:33:57.795 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.fasta is closed by DFSClient_NONMAPREDUCE_-1851012337_1
20:33:57.796 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:57.796 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.fasta.fai	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
20:33:57.797 INFO  StateChange - BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:46325 for /user/runner/hdfs.fasta.fai
20:33:57.798 INFO  DataNode - Receiving BP-582100319-10.1.0.111-1752611636488:blk_1073741827_1003 src: /127.0.0.1:48062 dest: /127.0.0.1:46325
20:33:57.799 INFO  clienttrace - src: /127.0.0.1:48062, dest: /127.0.0.1:46325, bytes: 19, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1851012337_1, offset: 0, srvID: 0774b3b8-dff5-4281-9052-361ec969c0fe, blockid: BP-582100319-10.1.0.111-1752611636488:blk_1073741827_1003, duration(ns): 540688
20:33:57.799 INFO  DataNode - PacketResponder: BP-582100319-10.1.0.111-1752611636488:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
20:33:57.800 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.fasta.fai is closed by DFSClient_NONMAPREDUCE_-1851012337_1
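The getfileinfo/create/completeFile audit events above show the test staging hdfs.cram, hdfs.fasta, and hdfs.fasta.fai under /user/runner on the mini cluster before reading them back through Spark. A sketch of that staging step using the standard Hadoop FileSystem API; the helper name and the local source paths are assumptions, only the HDFS destinations come from the log:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class HdfsStagingSketch {
    // Copies a CRAM plus its reference FASTA and .fai index into the mini cluster,
    // producing create/completeFile audit events like the ones logged above.
    // The FileSystem would come from MiniDFSCluster#getFileSystem(); the local
    // source paths are placeholders for the test resources.
    static void stageCramInputs(final FileSystem fs, final String localCram,
                                final String localFasta, final String localFai) throws IOException {
        fs.copyFromLocalFile(new Path(localCram),  new Path("/user/runner/hdfs.cram"));
        fs.copyFromLocalFile(new Path(localFasta), new Path("/user/runner/hdfs.fasta"));
        fs.copyFromLocalFile(new Path(localFai),   new Path("/user/runner/hdfs.fasta.fai"));
    }
}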
20:33:57.800 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.801 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.801 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.801 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.802 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.802 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
20:33:57.803 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
20:33:57.803 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.803 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.804 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:57.804 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:57.807 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:33:57.808 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.808 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.808 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
20:33:57.809 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
20:33:57.809 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.810 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.811 INFO  MemoryStore - Block broadcast_587 stored as values in memory (estimated size 472.0 B, free 1920.0 MiB)
20:33:57.811 INFO  MemoryStore - Block broadcast_587_piece0 stored as bytes in memory (estimated size 129.0 B, free 1920.0 MiB)
20:33:57.812 INFO  BlockManagerInfo - Added broadcast_587_piece0 in memory on localhost:45281 (size: 129.0 B, free: 1920.0 MiB)
20:33:57.812 INFO  SparkContext - Created broadcast 587 from broadcast at CramSource.java:114
20:33:57.813 INFO  MemoryStore - Block broadcast_588 stored as values in memory (estimated size 297.9 KiB, free 1919.7 MiB)
20:33:57.824 INFO  MemoryStore - Block broadcast_588_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:33:57.824 INFO  BlockManagerInfo - Added broadcast_588_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1920.0 MiB)
20:33:57.824 INFO  SparkContext - Created broadcast 588 from newAPIHadoopFile at PathSplitSource.java:96
20:33:57.841 INFO  MemoryStore - Block broadcast_589 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
20:33:57.842 INFO  MemoryStore - Block broadcast_589_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
20:33:57.842 INFO  BlockManagerInfo - Added broadcast_589_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1920.0 MiB)
20:33:57.842 INFO  SparkContext - Created broadcast 589 from broadcast at CramSource.java:114
20:33:57.843 INFO  MemoryStore - Block broadcast_590 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
20:33:57.853 INFO  MemoryStore - Block broadcast_590_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
20:33:57.853 INFO  BlockManagerInfo - Added broadcast_590_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.9 MiB)
20:33:57.853 INFO  SparkContext - Created broadcast 590 from newAPIHadoopFile at PathSplitSource.java:96
20:33:57.867 INFO  FileInputFormat - Total input files to process : 1
20:33:57.892 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:244
20:33:57.892 INFO  DAGScheduler - Got job 221 (collect at ReadsSparkSourceUnitTest.java:244) with 1 output partitions
20:33:57.892 INFO  DAGScheduler - Final stage: ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244)
20:33:57.892 INFO  DAGScheduler - Parents of final stage: List()
20:33:57.892 INFO  DAGScheduler - Missing parents: List()
20:33:57.892 INFO  DAGScheduler - Submitting ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:57.904 INFO  MemoryStore - Block broadcast_591 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
20:33:57.905 INFO  MemoryStore - Block broadcast_591_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
20:33:57.905 INFO  BlockManagerInfo - Added broadcast_591_piece0 in memory on localhost:45281 (size: 103.6 KiB, free: 1919.8 MiB)
20:33:57.905 INFO  SparkContext - Created broadcast 591 from broadcast at DAGScheduler.scala:1580
20:33:57.905 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:57.905 INFO  TaskSchedulerImpl - Adding task set 285.0 with 1 tasks resource profile 0
20:33:57.906 INFO  TaskSetManager - Starting task 0.0 in stage 285.0 (TID 372) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
20:33:57.906 INFO  Executor - Running task 0.0 in stage 285.0 (TID 372)
20:33:57.928 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
20:33:57.936 INFO  Executor - Finished task 0.0 in stage 285.0 (TID 372). 154101 bytes result sent to driver
20:33:57.937 INFO  TaskSetManager - Finished task 0.0 in stage 285.0 (TID 372) in 31 ms on localhost (executor driver) (1/1)
20:33:57.937 INFO  TaskSchedulerImpl - Removed TaskSet 285.0, whose tasks have all completed, from pool 
20:33:57.937 INFO  DAGScheduler - ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244) finished in 0.044 s
20:33:57.937 INFO  DAGScheduler - Job 221 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:57.937 INFO  TaskSchedulerImpl - Killing all running tasks in stage 285: Stage finished
20:33:57.937 INFO  DAGScheduler - Job 221 finished: collect at ReadsSparkSourceUnitTest.java:244, took 0.044950 s
20:33:57.938 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.938 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.939 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.939 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.940 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.940 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
20:33:57.941 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
20:33:57.941 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.942 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:57.942 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:57.943 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:57.944 WARN  DFSUtil - Unexpected value for data transfer bytes=23 duration=0
20:33:57.945 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:33:57.946 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.946 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.947 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
20:33:57.947 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
20:33:57.948 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.948 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.949 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:33:57.949 INFO  MemoryStore - Block broadcast_592 stored as values in memory (estimated size 472.0 B, free 1918.9 MiB)
20:33:57.950 INFO  MemoryStore - Block broadcast_592_piece0 stored as bytes in memory (estimated size 129.0 B, free 1918.9 MiB)
20:33:57.950 INFO  BlockManagerInfo - Added broadcast_592_piece0 in memory on localhost:45281 (size: 129.0 B, free: 1919.8 MiB)
20:33:57.950 INFO  SparkContext - Created broadcast 592 from broadcast at CramSource.java:114
20:33:57.951 INFO  MemoryStore - Block broadcast_593 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
20:33:57.957 INFO  MemoryStore - Block broadcast_593_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:33:57.957 INFO  BlockManagerInfo - Added broadcast_593_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.8 MiB)
20:33:57.958 INFO  SparkContext - Created broadcast 593 from newAPIHadoopFile at PathSplitSource.java:96
20:33:57.973 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:57.973 INFO  FileInputFormat - Total input files to process : 1
20:33:57.974 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:58.003 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:245
20:33:58.003 INFO  DAGScheduler - Got job 222 (collect at ReadsSparkSourceUnitTest.java:245) with 1 output partitions
20:33:58.003 INFO  DAGScheduler - Final stage: ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245)
20:33:58.003 INFO  DAGScheduler - Parents of final stage: List()
20:33:58.003 INFO  DAGScheduler - Missing parents: List()
20:33:58.004 INFO  DAGScheduler - Submitting ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:58.015 INFO  MemoryStore - Block broadcast_594 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
20:33:58.016 INFO  MemoryStore - Block broadcast_594_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
20:33:58.016 INFO  BlockManagerInfo - Added broadcast_594_piece0 in memory on localhost:45281 (size: 103.6 KiB, free: 1919.7 MiB)
20:33:58.016 INFO  SparkContext - Created broadcast 594 from broadcast at DAGScheduler.scala:1580
20:33:58.016 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:58.016 INFO  TaskSchedulerImpl - Adding task set 286.0 with 1 tasks resource profile 0
20:33:58.017 INFO  TaskSetManager - Starting task 0.0 in stage 286.0 (TID 373) (localhost, executor driver, partition 0, ANY, 7797 bytes) 
20:33:58.017 INFO  Executor - Running task 0.0 in stage 286.0 (TID 373)
20:33:58.038 INFO  NewHadoopRDD - Input split: hdfs://localhost:41421/user/runner/hdfs.cram:0+50619
20:33:58.039 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:58.039 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
20:33:58.040 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
20:33:58.040 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
20:33:58.040 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:58.041 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
20:33:58.041 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:58.042 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
20:33:58.042 WARN  DFSUtil - Unexpected value for data transfer bytes=23 duration=0
20:33:58.043 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:33:58.058 INFO  BlockManagerInfo - Removed broadcast_589_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.7 MiB)
20:33:58.059 INFO  BlockManagerInfo - Removed broadcast_590_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:33:58.059 INFO  BlockManagerInfo - Removed broadcast_587_piece0 on localhost:45281 in memory (size: 129.0 B, free: 1919.7 MiB)
20:33:58.060 INFO  BlockManagerInfo - Removed broadcast_588_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:33:58.060 WARN  DFSUtil - Unexpected value for data transfer bytes=232039 duration=0
20:33:58.060 INFO  BlockManagerInfo - Removed broadcast_591_piece0 on localhost:45281 in memory (size: 103.6 KiB, free: 1919.8 MiB)
20:33:58.061 WARN  DFSUtil - Unexpected value for data transfer bytes=99943 duration=0
20:33:58.064 INFO  Executor - Finished task 0.0 in stage 286.0 (TID 373). 154144 bytes result sent to driver
20:33:58.065 INFO  TaskSetManager - Finished task 0.0 in stage 286.0 (TID 373) in 48 ms on localhost (executor driver) (1/1)
20:33:58.065 INFO  TaskSchedulerImpl - Removed TaskSet 286.0, whose tasks have all completed, from pool 
20:33:58.065 INFO  DAGScheduler - ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245) finished in 0.061 s
20:33:58.065 INFO  DAGScheduler - Job 222 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:58.065 INFO  TaskSchedulerImpl - Killing all running tasks in stage 286: Stage finished
20:33:58.065 INFO  DAGScheduler - Job 222 finished: collect at ReadsSparkSourceUnitTest.java:245, took 0.062030 s
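Jobs 221 and 222 collect the same CRAM twice, once from the local filesystem (file:/.../NA12878.chr17_69k_70k.dictFix.cram) and once from the HDFS copy (hdfs://localhost:41421/user/runner/hdfs.cram), so the test can compare the two result sets on the driver. The GATK loading API itself is not visible in this log, so the sketch below covers only the collect-and-compare step on two already-built RDDs; the method name, the generic element type, and the RDD variables are assumptions:

import java.util.List;
import org.apache.spark.api.java.JavaRDD;
import org.testng.Assert;

final class CollectAndCompareSketch {
    // Collects both RDDs to the driver (the two "collect at ReadsSparkSourceUnitTest.java:244/245"
    // jobs above) and asserts the HDFS copy yields exactly the same records as the local file.
    // <T> stands in for the read record type; building the RDDs is out of scope here.
    static <T> void assertSameReads(final JavaRDD<T> localReads, final JavaRDD<T> hdfsReads) {
        final List<T> fromLocal = localReads.collect();
        final List<T> fromHdfs = hdfsReads.collect();
        Assert.assertEquals(fromHdfs, fromLocal, "HDFS and local CRAM should yield identical reads");
    }
}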
20:33:58.068 INFO  MiniDFSCluster - Shutting down the Mini HDFS Cluster
20:33:58.068 INFO  MiniDFSCluster - Shutting down DataNode 0
20:33:58.068 INFO  DirectoryScanner - Shutdown has been called
20:33:58.068 INFO  DataNode - Closing all peers.
20:33:58.068 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3005522765522698203/data/data1, DS-5c39c546-447b-4b8f-b6f2-ab2032193f23) exiting.
20:33:58.068 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3005522765522698203/data/data2, DS-f4c87c65-2cc9-4ebb-8c60-f84494e749ac) exiting.
20:33:58.071 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@11185945{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:33:58.071 INFO  AbstractConnector - Stopped ServerConnector@1a84c9af{HTTP/1.1, (http/1.1)}{localhost:0}
20:33:58.071 INFO  session - node0 Stopped scavenging
20:33:58.071 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@40072e1b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:33:58.072 INFO  DataNode - Waiting up to 30 seconds for transfer threads to complete
20:33:58.072 INFO  Server - Stopping server on 40897
20:33:58.073 INFO  Server - Stopping IPC Server Responder
20:33:58.073 INFO  Server - Stopping IPC Server listener on 0
20:33:58.073 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:33:58.073 WARN  DataNode - Ending block pool service for: Block pool BP-582100319-10.1.0.111-1752611636488 (Datanode Uuid 0774b3b8-dff5-4281-9052-361ec969c0fe) service to localhost/127.0.0.1:41421
20:33:58.073 INFO  DataNode - Removed Block pool BP-582100319-10.1.0.111-1752611636488 (Datanode Uuid 0774b3b8-dff5-4281-9052-361ec969c0fe)
20:33:58.073 INFO  FsDatasetImpl - Removing block pool BP-582100319-10.1.0.111-1752611636488
20:33:58.073 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:33:58.074 ERROR DataNode - Command processor encountered interrupt and exit.
20:33:58.074 WARN  DataNode - Ending command processor service for: Thread[Command processor,5,main]
20:33:58.074 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:33:58.074 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:33:58.074 INFO  FsDatasetAsyncDiskService - Shutting down all async disk service threads
20:33:58.074 INFO  FsDatasetAsyncDiskService - All async disk service threads have been shut down
20:33:58.074 INFO  RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
20:33:58.074 INFO  RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
20:33:58.074 INFO  DataNode - Shutdown complete.
20:33:58.074 INFO  MiniDFSCluster - Shutting down the namenode
20:33:58.074 INFO  FSNamesystem - Stopping services started for active state
20:33:58.074 INFO  FSEditLog - Ending log segment 1, 18
20:33:58.075 INFO  FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
20:33:58.075 INFO  FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
20:33:58.075 INFO  FSEditLog - Number of transactions: 19 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 3 Number of syncs: 17 SyncTimes(ms): 1 1 
20:33:58.075 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage3005522765522698203/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage3005522765522698203/name-0-1/current/edits_0000000000000000001-0000000000000000019
20:33:58.076 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage3005522765522698203/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage3005522765522698203/name-0-2/current/edits_0000000000000000001-0000000000000000019
20:33:58.076 INFO  FSEditLog - FSEditLogAsync was interrupted, exiting
20:33:58.076 INFO  CacheReplicationMonitor - Shutting down CacheReplicationMonitor
20:33:58.076 INFO  Server - Stopping server on 41421
20:33:58.076 INFO  Server - Stopping IPC Server listener on 0
20:33:58.076 INFO  Server - Stopping IPC Server Responder
20:33:58.077 INFO  BlockManager - Stopping RedundancyMonitor.
20:33:58.077 INFO  BlockManager - Stopping MarkedDeleteBlockScrubber.
20:33:58.082 INFO  FSNamesystem - Stopping services started for active state
20:33:58.082 INFO  FSNamesystem - Stopping services started for standby state
20:33:58.084 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@30bd80bd{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:33:58.084 INFO  AbstractConnector - Stopped ServerConnector@3a6f9780{HTTP/1.1, (http/1.1)}{localhost:0}
20:33:58.084 INFO  session - node0 Stopped scavenging
20:33:58.084 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@150a31a{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:33:58.085 INFO  MetricsSystemImpl - Stopping DataNode metrics system...
20:33:58.085 INFO  MetricsSystemImpl - DataNode metrics system stopped.
20:33:58.085 INFO  MetricsSystemImpl - DataNode metrics system shutdown complete.
20:33:58.097 INFO  MemoryStore - Block broadcast_595 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
20:33:58.103 INFO  MemoryStore - Block broadcast_595_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
20:33:58.103 INFO  BlockManagerInfo - Added broadcast_595_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.8 MiB)
20:33:58.103 INFO  SparkContext - Created broadcast 595 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.130 INFO  MemoryStore - Block broadcast_596 stored as values in memory (estimated size 298.0 KiB, free 1918.6 MiB)
20:33:58.136 INFO  MemoryStore - Block broadcast_596_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.6 MiB)
20:33:58.136 INFO  BlockManagerInfo - Added broadcast_596_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.8 MiB)
20:33:58.136 INFO  SparkContext - Created broadcast 596 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.163 INFO  MemoryStore - Block broadcast_597 stored as values in memory (estimated size 298.0 KiB, free 1918.3 MiB)
20:33:58.169 INFO  MemoryStore - Block broadcast_597_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.3 MiB)
20:33:58.169 INFO  BlockManagerInfo - Added broadcast_597_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.7 MiB)
20:33:58.170 INFO  SparkContext - Created broadcast 597 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.191 INFO  MemoryStore - Block broadcast_598 stored as values in memory (estimated size 536.0 B, free 1918.3 MiB)
20:33:58.191 INFO  MemoryStore - Block broadcast_598_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.3 MiB)
20:33:58.191 INFO  BlockManagerInfo - Added broadcast_598_piece0 in memory on localhost:45281 (size: 187.0 B, free: 1919.7 MiB)
20:33:58.192 INFO  SparkContext - Created broadcast 598 from broadcast at CramSource.java:114
20:33:58.192 INFO  MemoryStore - Block broadcast_599 stored as values in memory (estimated size 297.9 KiB, free 1918.0 MiB)
20:33:58.198 INFO  MemoryStore - Block broadcast_599_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
20:33:58.199 INFO  BlockManagerInfo - Added broadcast_599_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.7 MiB)
20:33:58.199 INFO  SparkContext - Created broadcast 599 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.214 INFO  MemoryStore - Block broadcast_600 stored as values in memory (estimated size 608.0 B, free 1917.9 MiB)
20:33:58.214 INFO  MemoryStore - Block broadcast_600_piece0 stored as bytes in memory (estimated size 206.0 B, free 1917.9 MiB)
20:33:58.214 INFO  BlockManagerInfo - Added broadcast_600_piece0 in memory on localhost:45281 (size: 206.0 B, free: 1919.7 MiB)
20:33:58.215 INFO  SparkContext - Created broadcast 600 from broadcast at CramSource.java:114
20:33:58.215 INFO  MemoryStore - Block broadcast_601 stored as values in memory (estimated size 297.9 KiB, free 1917.6 MiB)
20:33:58.221 INFO  MemoryStore - Block broadcast_601_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.6 MiB)
20:33:58.222 INFO  BlockManagerInfo - Added broadcast_601_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.6 MiB)
20:33:58.222 INFO  SparkContext - Created broadcast 601 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.237 INFO  MemoryStore - Block broadcast_602 stored as values in memory (estimated size 298.0 KiB, free 1917.3 MiB)
20:33:58.243 INFO  MemoryStore - Block broadcast_602_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.2 MiB)
20:33:58.243 INFO  BlockManagerInfo - Added broadcast_602_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.6 MiB)
20:33:58.244 INFO  SparkContext - Created broadcast 602 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.264 INFO  FileInputFormat - Total input files to process : 1
20:33:58.300 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:163
20:33:58.300 INFO  DAGScheduler - Got job 223 (collect at ReadsSparkSourceUnitTest.java:163) with 1 output partitions
20:33:58.300 INFO  DAGScheduler - Final stage: ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163)
20:33:58.300 INFO  DAGScheduler - Parents of final stage: List()
20:33:58.300 INFO  DAGScheduler - Missing parents: List()
20:33:58.301 INFO  DAGScheduler - Submitting ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:58.317 INFO  MemoryStore - Block broadcast_603 stored as values in memory (estimated size 426.2 KiB, free 1916.8 MiB)
20:33:58.321 INFO  BlockManagerInfo - Removed broadcast_599_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:58.322 INFO  BlockManagerInfo - Removed broadcast_600_piece0 on localhost:45281 in memory (size: 206.0 B, free: 1919.6 MiB)
20:33:58.322 INFO  BlockManagerInfo - Removed broadcast_597_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.7 MiB)
20:33:58.322 INFO  MemoryStore - Block broadcast_603_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.4 MiB)
20:33:58.322 INFO  BlockManagerInfo - Added broadcast_603_piece0 in memory on localhost:45281 (size: 153.6 KiB, free: 1919.5 MiB)
20:33:58.322 INFO  BlockManagerInfo - Removed broadcast_598_piece0 on localhost:45281 in memory (size: 187.0 B, free: 1919.5 MiB)
20:33:58.322 INFO  SparkContext - Created broadcast 603 from broadcast at DAGScheduler.scala:1580
20:33:58.323 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:58.323 INFO  TaskSchedulerImpl - Adding task set 287.0 with 1 tasks resource profile 0
20:33:58.323 INFO  BlockManagerInfo - Removed broadcast_601_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:33:58.323 INFO  BlockManagerInfo - Removed broadcast_596_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.6 MiB)
20:33:58.323 INFO  TaskSetManager - Starting task 0.0 in stage 287.0 (TID 374) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes) 
20:33:58.323 INFO  Executor - Running task 0.0 in stage 287.0 (TID 374)
20:33:58.324 INFO  BlockManagerInfo - Removed broadcast_595_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:33:58.358 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
20:33:58.368 INFO  Executor - Finished task 0.0 in stage 287.0 (TID 374). 650141 bytes result sent to driver
20:33:58.370 INFO  TaskSetManager - Finished task 0.0 in stage 287.0 (TID 374) in 46 ms on localhost (executor driver) (1/1)
20:33:58.370 INFO  TaskSchedulerImpl - Removed TaskSet 287.0, whose tasks have all completed, from pool 
20:33:58.370 INFO  DAGScheduler - ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163) finished in 0.069 s
20:33:58.370 INFO  DAGScheduler - Job 223 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:58.370 INFO  TaskSchedulerImpl - Killing all running tasks in stage 287: Stage finished
20:33:58.370 INFO  DAGScheduler - Job 223 finished: collect at ReadsSparkSourceUnitTest.java:163, took 0.069804 s
20:33:58.372 INFO  MemoryStore - Block broadcast_604 stored as values in memory (estimated size 208.0 B, free 1918.4 MiB)
20:33:58.374 INFO  MemoryStore - Block broadcast_604_piece0 stored as bytes in memory (estimated size 222.0 B, free 1918.4 MiB)
20:33:58.375 INFO  BlockManagerInfo - Added broadcast_604_piece0 in memory on localhost:45281 (size: 222.0 B, free: 1919.7 MiB)
20:33:58.375 INFO  SparkContext - Created broadcast 604 from broadcast at AbstractBinarySamSource.java:82
20:33:58.375 INFO  MemoryStore - Block broadcast_605 stored as values in memory (estimated size 297.9 KiB, free 1918.1 MiB)
20:33:58.382 INFO  MemoryStore - Block broadcast_605_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.0 MiB)
20:33:58.382 INFO  BlockManagerInfo - Added broadcast_605_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.6 MiB)
20:33:58.382 INFO  SparkContext - Created broadcast 605 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.403 INFO  FileInputFormat - Total input files to process : 1
20:33:58.406 INFO  SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:265
20:33:58.406 INFO  DAGScheduler - Got job 224 (count at ReadsSparkSourceUnitTest.java:265) with 1 output partitions
20:33:58.406 INFO  DAGScheduler - Final stage: ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265)
20:33:58.406 INFO  DAGScheduler - Parents of final stage: List()
20:33:58.406 INFO  DAGScheduler - Missing parents: List()
20:33:58.406 INFO  DAGScheduler - Submitting ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:58.429 INFO  MemoryStore - Block broadcast_606 stored as values in memory (estimated size 426.2 KiB, free 1917.6 MiB)
20:33:58.430 INFO  MemoryStore - Block broadcast_606_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.5 MiB)
20:33:58.430 INFO  BlockManagerInfo - Added broadcast_606_piece0 in memory on localhost:45281 (size: 153.7 KiB, free: 1919.5 MiB)
20:33:58.431 INFO  SparkContext - Created broadcast 606 from broadcast at DAGScheduler.scala:1580
20:33:58.431 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:58.431 INFO  TaskSchedulerImpl - Adding task set 288.0 with 1 tasks resource profile 0
20:33:58.431 INFO  TaskSetManager - Starting task 0.0 in stage 288.0 (TID 375) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
20:33:58.432 INFO  Executor - Running task 0.0 in stage 288.0 (TID 375)
20:33:58.461 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
20:33:58.465 INFO  Executor - Finished task 0.0 in stage 288.0 (TID 375). 989 bytes result sent to driver
20:33:58.465 INFO  TaskSetManager - Finished task 0.0 in stage 288.0 (TID 375) in 34 ms on localhost (executor driver) (1/1)
20:33:58.465 INFO  TaskSchedulerImpl - Removed TaskSet 288.0, whose tasks have all completed, from pool 
20:33:58.465 INFO  DAGScheduler - ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265) finished in 0.058 s
20:33:58.465 INFO  DAGScheduler - Job 224 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:58.465 INFO  TaskSchedulerImpl - Killing all running tasks in stage 288: Stage finished
20:33:58.465 INFO  DAGScheduler - Job 224 finished: count at ReadsSparkSourceUnitTest.java:265, took 0.059439 s
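Job 224 is a plain count() action over the filtered reads RDD (the MapPartitionsRDD produced by the filter at ReadsSparkSource.java:96). A minimal sketch of that kind of count assertion; the helper name, the RDD, and the expected value are placeholders rather than the real test's inputs:

import org.apache.spark.api.java.JavaRDD;
import org.testng.Assert;

final class ReadCountSketch {
    // A single Spark count() action over the filtered reads RDD, as in Job 224 above.
    // Both the RDD and the expected value are placeholders, not the real test's inputs.
    static <T> void assertReadCount(final JavaRDD<T> reads, final long expectedCount) {
        Assert.assertEquals(reads.count(), expectedCount);
    }
}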
20:33:58.468 INFO  MemoryStore - Block broadcast_607 stored as values in memory (estimated size 160.0 B, free 1917.5 MiB)
20:33:58.469 INFO  MemoryStore - Block broadcast_607_piece0 stored as bytes in memory (estimated size 212.0 B, free 1917.5 MiB)
20:33:58.469 INFO  BlockManagerInfo - Added broadcast_607_piece0 in memory on localhost:45281 (size: 212.0 B, free: 1919.5 MiB)
20:33:58.469 INFO  SparkContext - Created broadcast 607 from broadcast at AbstractBinarySamSource.java:82
20:33:58.470 INFO  MemoryStore - Block broadcast_608 stored as values in memory (estimated size 298.0 KiB, free 1917.2 MiB)
20:33:58.476 INFO  MemoryStore - Block broadcast_608_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.1 MiB)
20:33:58.476 INFO  BlockManagerInfo - Added broadcast_608_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.4 MiB)
20:33:58.476 INFO  SparkContext - Created broadcast 608 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.497 INFO  FileInputFormat - Total input files to process : 1
20:33:58.501 INFO  SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:286
20:33:58.501 INFO  DAGScheduler - Got job 225 (count at ReadsSparkSourceUnitTest.java:286) with 1 output partitions
20:33:58.501 INFO  DAGScheduler - Final stage: ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286)
20:33:58.501 INFO  DAGScheduler - Parents of final stage: List()
20:33:58.501 INFO  DAGScheduler - Missing parents: List()
20:33:58.501 INFO  DAGScheduler - Submitting ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:58.518 INFO  MemoryStore - Block broadcast_609 stored as values in memory (estimated size 426.2 KiB, free 1916.7 MiB)
20:33:58.519 INFO  MemoryStore - Block broadcast_609_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1916.6 MiB)
20:33:58.519 INFO  BlockManagerInfo - Added broadcast_609_piece0 in memory on localhost:45281 (size: 153.7 KiB, free: 1919.3 MiB)
20:33:58.519 INFO  SparkContext - Created broadcast 609 from broadcast at DAGScheduler.scala:1580
20:33:58.520 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:58.520 INFO  TaskSchedulerImpl - Adding task set 289.0 with 1 tasks resource profile 0
20:33:58.520 INFO  TaskSetManager - Starting task 0.0 in stage 289.0 (TID 376) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7901 bytes) 
20:33:58.520 INFO  Executor - Running task 0.0 in stage 289.0 (TID 376)
20:33:58.550 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/CEUTrio.HiSeq.WGS.b37.NA12878.snippet_with_unmapped.bam:0+2184
20:33:58.554 INFO  Executor - Finished task 0.0 in stage 289.0 (TID 376). 989 bytes result sent to driver
20:33:58.554 INFO  TaskSetManager - Finished task 0.0 in stage 289.0 (TID 376) in 34 ms on localhost (executor driver) (1/1)
20:33:58.554 INFO  TaskSchedulerImpl - Removed TaskSet 289.0, whose tasks have all completed, from pool 
20:33:58.555 INFO  DAGScheduler - ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286) finished in 0.054 s
20:33:58.555 INFO  DAGScheduler - Job 225 is finished. Cancelling potential speculative or zombie tasks for this job
20:33:58.555 INFO  TaskSchedulerImpl - Killing all running tasks in stage 289: Stage finished
20:33:58.555 INFO  DAGScheduler - Job 225 finished: count at ReadsSparkSourceUnitTest.java:286, took 0.054046 s
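
The two count jobs above (count at ReadsSparkSourceUnitTest.java:265 and :286) check how many reads come back through the Spark read path for the BAM files named in the input splits. As a rough illustration only, and not the test's actual helper code, a plain htsjdk count over the same local BAM might look like the sketch below; the path is taken from the NewHadoopRDD input split logged above, and the SILENT stringency mirrors the readsSparkSourceTestSilent variant listed in this report.

    import htsjdk.samtools.SAMRecord;
    import htsjdk.samtools.SamReader;
    import htsjdk.samtools.SamReaderFactory;
    import htsjdk.samtools.ValidationStringency;
    import java.io.File;
    import java.io.IOException;

    public final class CountReadsSketch {
        public static void main(String[] args) throws IOException {
            // Path copied from the input split logged above.
            final File bam = new File(
                    "/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam");
            long count = 0;
            // SILENT validation, analogous to the "Silent" test variant.
            try (SamReader reader = SamReaderFactory.makeDefault()
                    .validationStringency(ValidationStringency.SILENT)
                    .open(bam)) {
                for (SAMRecord record : reader) {
                    count++;
                }
            }
            System.out.println("reads: " + count);
        }
    }

This is a sequential, single-process count; the Spark jobs above do the equivalent work per partition and sum the results on the driver.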
20:33:58.558 INFO  MemoryStore - Block broadcast_610 stored as values in memory (estimated size 298.0 KiB, free 1916.3 MiB)
20:33:58.565 INFO  BlockManagerInfo - Removed broadcast_604_piece0 on localhost:45281 in memory (size: 222.0 B, free: 1919.3 MiB)
20:33:58.566 INFO  BlockManagerInfo - Removed broadcast_602_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.3 MiB)
20:33:58.566 INFO  BlockManagerInfo - Removed broadcast_608_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.4 MiB)
20:33:58.566 INFO  BlockManagerInfo - Removed broadcast_607_piece0 on localhost:45281 in memory (size: 212.0 B, free: 1919.4 MiB)
20:33:58.567 INFO  BlockManagerInfo - Removed broadcast_603_piece0 on localhost:45281 in memory (size: 153.6 KiB, free: 1919.5 MiB)
20:33:58.567 INFO  BlockManagerInfo - Removed broadcast_605_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1919.5 MiB)
20:33:58.567 INFO  BlockManagerInfo - Removed broadcast_606_piece0 on localhost:45281 in memory (size: 153.7 KiB, free: 1919.7 MiB)
20:33:58.568 INFO  BlockManagerInfo - Removed broadcast_609_piece0 on localhost:45281 in memory (size: 153.7 KiB, free: 1919.8 MiB)
20:33:58.571 INFO  MemoryStore - Block broadcast_610_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.9 MiB)
20:33:58.571 INFO  BlockManagerInfo - Added broadcast_610_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.8 MiB)
20:33:58.571 INFO  SparkContext - Created broadcast 610 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.593 INFO  MemoryStore - Block broadcast_611 stored as values in memory (estimated size 298.2 KiB, free 1918.6 MiB)
20:33:58.599 INFO  MemoryStore - Block broadcast_611_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.6 MiB)
20:33:58.599 INFO  BlockManagerInfo - Added broadcast_611_piece0 in memory on localhost:45281 (size: 50.3 KiB, free: 1919.8 MiB)
20:33:58.599 INFO  SparkContext - Created broadcast 611 from newAPIHadoopFile at PathSplitSource.java:96
20:33:58.619 INFO  FileInputFormat - Total input files to process : 1
20:33:58.619 INFO  FileInputFormat - Total input files to process : 1
20:33:58.631 INFO  MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
20:33:58.632 INFO  NameNode - Formatting using clusterid: testClusterID
20:33:58.632 INFO  FSEditLog - Edit logging is async:true
20:33:58.639 INFO  FSNamesystem - KeyProvider: null
20:33:58.639 INFO  FSNamesystem - fsLock is fair: true
20:33:58.639 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
20:33:58.639 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
20:33:58.639 INFO  FSNamesystem - supergroup             = supergroup
20:33:58.639 INFO  FSNamesystem - isPermissionEnabled    = true
20:33:58.639 INFO  FSNamesystem - isStoragePolicyEnabled = true
20:33:58.639 INFO  FSNamesystem - HA Enabled: false
20:33:58.639 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:58.639 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:33:58.639 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:33:58.639 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:33:58.639 INFO  BlockManager - The block deletion will start around 2025 Jul 15 20:33:58
20:33:58.639 INFO  GSet - Computing capacity for map BlocksMap
20:33:58.639 INFO  GSet - VM type       = 64-bit
20:33:58.639 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
20:33:58.639 INFO  GSet - capacity      = 2^23 = 8388608 entries
20:33:58.641 INFO  BlockManager - Storage policy satisfier is disabled
20:33:58.641 INFO  BlockManager - dfs.block.access.token.enable = false
20:33:58.641 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:33:58.641 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:33:58.641 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:33:58.641 INFO  BlockManager - defaultReplication         = 1
20:33:58.641 INFO  BlockManager - maxReplication             = 512
20:33:58.641 INFO  BlockManager - minReplication             = 1
20:33:58.641 INFO  BlockManager - maxReplicationStreams      = 2
20:33:58.641 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
20:33:58.641 INFO  BlockManager - encryptDataTransfer        = false
20:33:58.641 INFO  BlockManager - maxNumBlocksToLog          = 1000
20:33:58.641 INFO  GSet - Computing capacity for map INodeMap
20:33:58.641 INFO  GSet - VM type       = 64-bit
20:33:58.641 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
20:33:58.641 INFO  GSet - capacity      = 2^22 = 4194304 entries
20:33:58.645 INFO  BlockManagerInfo - Removed broadcast_611_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:33:58.645 INFO  BlockManagerInfo - Removed broadcast_610_piece0 on localhost:45281 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:33:58.645 INFO  FSDirectory - ACLs enabled? true
20:33:58.645 INFO  FSDirectory - POSIX ACL inheritance enabled? true
20:33:58.645 INFO  FSDirectory - XAttrs enabled? true
20:33:58.645 INFO  NameNode - Caching file names occurring more than 10 times
20:33:58.645 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:33:58.645 INFO  SnapshotManager - SkipList is disabled
20:33:58.645 INFO  GSet - Computing capacity for map cachedBlocks
20:33:58.645 INFO  GSet - VM type       = 64-bit
20:33:58.645 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:33:58.645 INFO  GSet - capacity      = 2^20 = 1048576 entries
20:33:58.648 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:33:58.648 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:33:58.648 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:33:58.648 INFO  FSNamesystem - Retry cache on namenode is enabled
20:33:58.648 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:33:58.648 INFO  GSet - Computing capacity for map NameNodeRetryCache
20:33:58.648 INFO  GSet - VM type       = 64-bit
20:33:58.648 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:33:58.648 INFO  GSet - capacity      = 2^17 = 131072 entries
20:33:58.649 INFO  FSImage - Allocated new BlockPoolId: BP-179272973-10.1.0.111-1752611638649
20:33:58.650 INFO  Storage - Storage directory /tmp/minicluster_storage3095298801961576547/name-0-1 has been successfully formatted.
20:33:58.652 INFO  Storage - Storage directory /tmp/minicluster_storage3095298801961576547/name-0-2 has been successfully formatted.
20:33:58.665 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage3095298801961576547/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
20:33:58.665 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage3095298801961576547/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
20:33:58.670 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage3095298801961576547/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:33:58.670 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage3095298801961576547/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:33:58.672 INFO  NNStorageRetentionManager - Going to retain 1 images with txid >= 0
20:33:58.680 INFO  FSNamesystem - Stopping services started for active state
20:33:58.680 INFO  FSNamesystem - Stopping services started for standby state
20:33:58.681 INFO  NameNode - createNameNode []
20:33:58.682 WARN  MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
20:33:58.683 INFO  MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
20:33:58.683 INFO  MetricsSystemImpl - NameNode metrics system started
20:33:58.683 INFO  NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
20:33:58.698 INFO  JvmPauseMonitor - Starting JVM pause monitor
20:33:58.698 INFO  DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
20:33:58.698 INFO  DFSUtil - Starting Web-server for hdfs at: http://localhost:0
20:33:58.699 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:33:58.699 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
20:33:58.700 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:33:58.700 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
20:33:58.700 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:33:58.701 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
20:33:58.701 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
20:33:58.701 INFO  HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
20:33:58.702 INFO  HttpServer2 - Jetty bound to port 37849
20:33:58.702 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:33:58.703 INFO  session - DefaultSessionIdManager workerName=node0
20:33:58.703 INFO  session - No SessionScavenger set, using defaults
20:33:58.703 INFO  session - node0 Scavenging every 600000ms
20:33:58.703 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:33:58.703 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@4e7b9be{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:33:58.742 INFO  BlockManagerInfo - Removed broadcast_594_piece0 on localhost:45281 in memory (size: 103.6 KiB, free: 1920.0 MiB)
20:33:58.743 INFO  BlockManagerInfo - Removed broadcast_593_piece0 on localhost:45281 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:33:58.743 INFO  BlockManagerInfo - Removed broadcast_592_piece0 on localhost:45281 in memory (size: 129.0 B, free: 1920.0 MiB)
20:33:58.825 INFO  ContextHandler - Started o.e.j.w.WebAppContext@747c529{hdfs,/,file:///tmp/jetty-localhost-37849-hadoop-hdfs-3_3_6-tests_jar-_-any-9279731431465767518/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:33:58.826 INFO  AbstractConnector - Started ServerConnector@2fbf01ba{HTTP/1.1, (http/1.1)}{localhost:37849}
20:33:58.826 INFO  Server - Started @74576ms
20:33:58.826 INFO  FSEditLog - Edit logging is async:true
20:33:58.835 INFO  FSNamesystem - KeyProvider: null
20:33:58.835 INFO  FSNamesystem - fsLock is fair: true
20:33:58.836 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
20:33:58.836 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
20:33:58.836 INFO  FSNamesystem - supergroup             = supergroup
20:33:58.836 INFO  FSNamesystem - isPermissionEnabled    = true
20:33:58.836 INFO  FSNamesystem - isStoragePolicyEnabled = true
20:33:58.836 INFO  FSNamesystem - HA Enabled: false
20:33:58.836 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:58.836 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:33:58.836 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:33:58.836 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:33:58.836 INFO  BlockManager - The block deletion will start around 2025 Jul 15 20:33:58
20:33:58.836 INFO  GSet - Computing capacity for map BlocksMap
20:33:58.836 INFO  GSet - VM type       = 64-bit
20:33:58.836 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
20:33:58.836 INFO  GSet - capacity      = 2^23 = 8388608 entries
20:33:58.838 INFO  BlockManager - Storage policy satisfier is disabled
20:33:58.838 INFO  BlockManager - dfs.block.access.token.enable = false
20:33:58.838 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:33:58.838 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:33:58.838 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:33:58.838 INFO  BlockManager - defaultReplication         = 1
20:33:58.838 INFO  BlockManager - maxReplication             = 512
20:33:58.838 INFO  BlockManager - minReplication             = 1
20:33:58.838 INFO  BlockManager - maxReplicationStreams      = 2
20:33:58.838 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
20:33:58.838 INFO  BlockManager - encryptDataTransfer        = false
20:33:58.838 INFO  BlockManager - maxNumBlocksToLog          = 1000
20:33:58.839 INFO  GSet - Computing capacity for map INodeMap
20:33:58.839 INFO  GSet - VM type       = 64-bit
20:33:58.839 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
20:33:58.839 INFO  GSet - capacity      = 2^22 = 4194304 entries
20:33:58.839 INFO  FSDirectory - ACLs enabled? true
20:33:58.839 INFO  FSDirectory - POSIX ACL inheritance enabled? true
20:33:58.839 INFO  FSDirectory - XAttrs enabled? true
20:33:58.839 INFO  NameNode - Caching file names occurring more than 10 times
20:33:58.839 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:33:58.839 INFO  SnapshotManager - SkipList is disabled
20:33:58.840 INFO  GSet - Computing capacity for map cachedBlocks
20:33:58.840 INFO  GSet - VM type       = 64-bit
20:33:58.840 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:33:58.840 INFO  GSet - capacity      = 2^20 = 1048576 entries
20:33:58.840 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:33:58.840 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:33:58.840 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:33:58.840 INFO  FSNamesystem - Retry cache on namenode is enabled
20:33:58.840 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:33:58.840 INFO  GSet - Computing capacity for map NameNodeRetryCache
20:33:58.840 INFO  GSet - VM type       = 64-bit
20:33:58.840 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:33:58.840 INFO  GSet - capacity      = 2^17 = 131072 entries
20:33:58.841 INFO  Storage - Lock on /tmp/minicluster_storage3095298801961576547/name-0-1/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:58.842 INFO  Storage - Lock on /tmp/minicluster_storage3095298801961576547/name-0-2/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:58.843 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage3095298801961576547/name-0-1/current
20:33:58.843 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage3095298801961576547/name-0-2/current
20:33:58.843 INFO  FSImage - No edit log streams selected.
20:33:58.843 INFO  FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage3095298801961576547/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
20:33:58.843 INFO  FSImageFormatPBINode - Loading 1 INodes.
20:33:58.844 INFO  FSImageFormatPBINode - Successfully loaded 1 inodes
20:33:58.844 INFO  FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
20:33:58.844 INFO  FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
20:33:58.844 INFO  FSImage - Loaded image for txid 0 from /tmp/minicluster_storage3095298801961576547/name-0-1/current/fsimage_0000000000000000000
20:33:58.844 INFO  FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
20:33:58.844 INFO  FSEditLog - Starting log segment at 1
20:33:58.847 INFO  NameCache - initialized with 0 entries 0 lookups
20:33:58.847 INFO  FSNamesystem - Finished loading FSImage in 7 msecs
20:33:58.847 INFO  NameNode - RPC server is binding to localhost:0
20:33:58.847 INFO  NameNode - Enable NameNode state context:false
20:33:58.847 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:33:58.847 INFO  Server - Listener at localhost:33947
20:33:58.848 INFO  Server - Starting Socket Reader #1 for port 0
20:33:58.849 INFO  NameNode - Clients are to use localhost:33947 to access this namenode/service.
20:33:58.850 INFO  FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
20:33:58.859 INFO  LeaseManager - Number of blocks under construction: 0
20:33:58.860 INFO  DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
20:33:58.860 INFO  BlockManager - Start MarkedDeleteBlockScrubber thread
20:33:58.860 INFO  BlockManager - initializing replication queues
20:33:58.861 INFO  StateChange - STATE* Leaving safe mode after 0 secs
20:33:58.861 INFO  StateChange - STATE* Network topology has 0 racks and 0 datanodes
20:33:58.861 INFO  StateChange - STATE* UnderReplicatedBlocks has 0 blocks
20:33:58.863 INFO  Server - IPC Server Responder: starting
20:33:58.864 INFO  Server - IPC Server listener on 0: starting
20:33:58.865 INFO  NameNode - NameNode RPC up at: localhost/127.0.0.1:33947
20:33:58.865 INFO  BlockManager - Total number of blocks            = 0
20:33:58.865 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:33:58.865 INFO  BlockManager - Number of invalid blocks          = 0
20:33:58.865 INFO  BlockManager - Number of under-replicated blocks = 0
20:33:58.865 INFO  BlockManager - Number of  over-replicated blocks = 0
20:33:58.865 INFO  BlockManager - Number of blocks being written    = 0
20:33:58.865 INFO  StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 4 msec
20:33:58.865 INFO  FSNamesystem - Starting services required for active state
20:33:58.865 INFO  FSDirectory - Initializing quota with 12 thread(s)
20:33:58.865 INFO  FSDirectory - Quota initialization completed in 0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
20:33:58.866 INFO  CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
20:33:58.866 INFO  MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data1,[DISK]file:/tmp/minicluster_storage3095298801961576547/data/data2
20:33:58.867 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data1
20:33:58.867 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data2
20:33:58.877 INFO  MetricsSystemImpl - DataNode metrics system started (again)
20:33:58.877 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:58.877 INFO  BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
20:33:58.877 INFO  DataNode - Configured hostname is 127.0.0.1
20:33:58.877 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:33:58.878 INFO  DataNode - Starting DataNode with maxLockedMemory = 0
20:33:58.878 INFO  DataNode - Opened streaming server at /127.0.0.1:44703
20:33:58.878 INFO  DataNode - Balancing bandwidth is 104857600 bytes/s
20:33:58.878 INFO  DataNode - Number threads for balancing is 100
20:33:58.879 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:33:58.880 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
20:33:58.880 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:33:58.881 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
20:33:58.881 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:33:58.881 INFO  HttpServer2 - Jetty bound to port 45955
20:33:58.881 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:33:58.882 INFO  session - DefaultSessionIdManager workerName=node0
20:33:58.882 INFO  session - No SessionScavenger set, using defaults
20:33:58.882 INFO  session - node0 Scavenging every 660000ms
20:33:58.882 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@6c3a42a5{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:33:58.968 INFO  ContextHandler - Started o.e.j.w.WebAppContext@72dff228{datanode,/,file:///tmp/jetty-localhost-45955-hadoop-hdfs-3_3_6-tests_jar-_-any-6492398764976151124/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:33:58.968 INFO  AbstractConnector - Started ServerConnector@62c5b46c{HTTP/1.1, (http/1.1)}{localhost:45955}
20:33:58.968 INFO  Server - Started @74718ms
20:33:58.969 WARN  DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
20:33:58.970 INFO  DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:45227
20:33:58.970 INFO  DataNode - dnUserName = runner
20:33:58.970 INFO  DataNode - supergroup = supergroup
20:33:58.970 INFO  JvmPauseMonitor - Starting JVM pause monitor
20:33:58.970 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:33:58.970 INFO  Server - Listener at localhost:40257
20:33:58.971 INFO  Server - Starting Socket Reader #1 for port 0
20:33:58.972 INFO  DataNode - Opened IPC server at /127.0.0.1:40257
20:33:58.986 INFO  DataNode - Refresh request received for nameservices: null
20:33:58.986 INFO  DataNode - Starting BPOfferServices for nameservices: <default>
20:33:58.987 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:33:58.987 INFO  DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:33947 starting to offer service
20:33:58.987 INFO  Server - IPC Server Responder: starting
20:33:58.987 INFO  Server - IPC Server listener on 0: starting
20:33:58.988 INFO  DataNode - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:33947
20:33:58.988 INFO  Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
20:33:58.989 INFO  Storage - Lock on /tmp/minicluster_storage3095298801961576547/data/data1/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:58.989 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data1 is not formatted for namespace 2131214705. Formatting...
20:33:58.989 INFO  Storage - Generated new storageID DS-d0715a24-53c7-4144-ba1a-12a8c244b014 for directory /tmp/minicluster_storage3095298801961576547/data/data1 
20:33:58.990 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
20:33:58.990 INFO  MiniDFSCluster - dnInfo.length != numDataNodes
20:33:58.990 INFO  MiniDFSCluster - Waiting for cluster to become active
20:33:58.991 INFO  Storage - Lock on /tmp/minicluster_storage3095298801961576547/data/data2/in_use.lock acquired by nodename 3077@pkrvmq0rgcvqdmg
20:33:58.991 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data2 is not formatted for namespace 2131214705. Formatting...
20:33:58.991 INFO  Storage - Generated new storageID DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c for directory /tmp/minicluster_storage3095298801961576547/data/data2 
20:33:59.009 INFO  Storage - Analyzing storage directories for bpid BP-179272973-10.1.0.111-1752611638649
20:33:59.009 INFO  Storage - Locking is disabled for /tmp/minicluster_storage3095298801961576547/data/data1/current/BP-179272973-10.1.0.111-1752611638649
20:33:59.009 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data1 and block pool id BP-179272973-10.1.0.111-1752611638649 is not formatted. Formatting ...
20:33:59.009 INFO  Storage - Formatting block pool BP-179272973-10.1.0.111-1752611638649 directory /tmp/minicluster_storage3095298801961576547/data/data1/current/BP-179272973-10.1.0.111-1752611638649/current
20:33:59.026 INFO  Storage - Analyzing storage directories for bpid BP-179272973-10.1.0.111-1752611638649
20:33:59.026 INFO  Storage - Locking is disabled for /tmp/minicluster_storage3095298801961576547/data/data2/current/BP-179272973-10.1.0.111-1752611638649
20:33:59.026 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data2 and block pool id BP-179272973-10.1.0.111-1752611638649 is not formatted. Formatting ...
20:33:59.027 INFO  Storage - Formatting block pool BP-179272973-10.1.0.111-1752611638649 directory /tmp/minicluster_storage3095298801961576547/data/data2/current/BP-179272973-10.1.0.111-1752611638649/current
20:33:59.028 INFO  DataNode - Setting up storage: nsid=2131214705;bpid=BP-179272973-10.1.0.111-1752611638649;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=2131214705;c=1752611638649;bpid=BP-179272973-10.1.0.111-1752611638649;dnuuid=null
20:33:59.029 INFO  DataNode - Generated and persisted new Datanode UUID cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804
20:33:59.029 INFO  FsDatasetImpl - The datanode lock is a read write lock
20:33:59.030 INFO  FsDatasetImpl - Added new volume: DS-d0715a24-53c7-4144-ba1a-12a8c244b014
20:33:59.030 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data1, StorageType: DISK
20:33:59.031 INFO  FsDatasetImpl - Added new volume: DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c
20:33:59.031 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage3095298801961576547/data/data2, StorageType: DISK
20:33:59.031 INFO  MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
20:33:59.031 INFO  FsDatasetImpl - Registered FSDatasetState MBean
20:33:59.032 INFO  FsDatasetImpl - Adding block pool BP-179272973-10.1.0.111-1752611638649
20:33:59.032 INFO  FsDatasetImpl - Scanning block pool BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data1...
20:33:59.032 INFO  FsDatasetImpl - Scanning block pool BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data2...
20:33:59.032 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage3095298801961576547/data/data1/current/BP-179272973-10.1.0.111-1752611638649/current, will proceed with Du for space computation calculation, 
20:33:59.032 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage3095298801961576547/data/data2/current/BP-179272973-10.1.0.111-1752611638649/current, will proceed with Du for space computation calculation, 
20:33:59.044 INFO  FsDatasetImpl - Time taken to scan block pool BP-179272973-10.1.0.111-1752611638649 on /tmp/minicluster_storage3095298801961576547/data/data1: 12ms
20:33:59.044 INFO  FsDatasetImpl - Time taken to scan block pool BP-179272973-10.1.0.111-1752611638649 on /tmp/minicluster_storage3095298801961576547/data/data2: 12ms
20:33:59.044 INFO  FsDatasetImpl - Total time to scan all replicas for block pool BP-179272973-10.1.0.111-1752611638649: 12ms
20:33:59.045 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data1...
20:33:59.045 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage3095298801961576547/data/data1/current/BP-179272973-10.1.0.111-1752611638649/current/replicas doesn't exist 
20:33:59.045 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data2...
20:33:59.045 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage3095298801961576547/data/data2/current/BP-179272973-10.1.0.111-1752611638649/current/replicas doesn't exist 
20:33:59.045 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data1: 0ms
20:33:59.045 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data2: 0ms
20:33:59.045 INFO  FsDatasetImpl - Total time to add all replicas to map for block pool BP-179272973-10.1.0.111-1752611638649: 1ms
20:33:59.045 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage3095298801961576547/data/data1
20:33:59.045 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage3095298801961576547/data/data1
20:33:59.045 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage3095298801961576547/data/data2
20:33:59.045 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage3095298801961576547/data/data2
20:33:59.046 INFO  VolumeScanner - Now scanning bpid BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data2
20:33:59.046 INFO  VolumeScanner - Now scanning bpid BP-179272973-10.1.0.111-1752611638649 on volume /tmp/minicluster_storage3095298801961576547/data/data1
20:33:59.046 WARN  DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
20:33:59.046 INFO  DirectoryScanner - Periodic Directory Tree Verification scan starting in 8928113ms with interval of 21600000ms and throttle limit of -1ms/s
20:33:59.046 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3095298801961576547/data/data1, DS-d0715a24-53c7-4144-ba1a-12a8c244b014): finished scanning block pool BP-179272973-10.1.0.111-1752611638649
20:33:59.046 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3095298801961576547/data/data2, DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c): finished scanning block pool BP-179272973-10.1.0.111-1752611638649
20:33:59.046 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3095298801961576547/data/data1, DS-d0715a24-53c7-4144-ba1a-12a8c244b014): no suitable block pools found to scan.  Waiting 1814400000 ms.
20:33:59.046 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3095298801961576547/data/data2, DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c): no suitable block pools found to scan.  Waiting 1814400000 ms.
20:33:59.046 INFO  DataNode - Block pool BP-179272973-10.1.0.111-1752611638649 (Datanode Uuid cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804) service to localhost/127.0.0.1:33947 beginning handshake with NN
20:33:59.047 INFO  StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:44703, datanodeUuid=cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, infoPort=45227, infoSecurePort=0, ipcPort=40257, storageInfo=lv=-57;cid=testClusterID;nsid=2131214705;c=1752611638649) storage cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804
20:33:59.047 INFO  NetworkTopology - Adding a new node: /default-rack/127.0.0.1:44703
20:33:59.047 INFO  BlockReportLeaseManager - Registered DN cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804 (127.0.0.1:44703).
20:33:59.047 INFO  DataNode - Block pool BP-179272973-10.1.0.111-1752611638649 (Datanode Uuid cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804) service to localhost/127.0.0.1:33947 successfully registered with NN
20:33:59.047 INFO  DataNode - For namenode localhost/127.0.0.1:33947 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
20:33:59.047 INFO  DataNode - Starting IBR Task Handler.
20:33:59.048 INFO  DatanodeDescriptor - Adding new storage ID DS-d0715a24-53c7-4144-ba1a-12a8c244b014 for DN 127.0.0.1:44703
20:33:59.048 INFO  DatanodeDescriptor - Adding new storage ID DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c for DN 127.0.0.1:44703
20:33:59.048 INFO  DataNode - After receiving heartbeat response, updating state of namenode localhost:33947 to active
20:33:59.048 INFO  BlockStateChange - BLOCK* processReport 0x86f85bd8d712ef66 with lease ID 0xdf742a1b72547459: Processing first storage report for DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c from datanode DatanodeRegistration(127.0.0.1:44703, datanodeUuid=cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, infoPort=45227, infoSecurePort=0, ipcPort=40257, storageInfo=lv=-57;cid=testClusterID;nsid=2131214705;c=1752611638649)
20:33:59.048 INFO  BlockStateChange - BLOCK* processReport 0x86f85bd8d712ef66 with lease ID 0xdf742a1b72547459: from storage DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c node DatanodeRegistration(127.0.0.1:44703, datanodeUuid=cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, infoPort=45227, infoSecurePort=0, ipcPort=40257, storageInfo=lv=-57;cid=testClusterID;nsid=2131214705;c=1752611638649), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
20:33:59.048 INFO  BlockStateChange - BLOCK* processReport 0x86f85bd8d712ef66 with lease ID 0xdf742a1b72547459: Processing first storage report for DS-d0715a24-53c7-4144-ba1a-12a8c244b014 from datanode DatanodeRegistration(127.0.0.1:44703, datanodeUuid=cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, infoPort=45227, infoSecurePort=0, ipcPort=40257, storageInfo=lv=-57;cid=testClusterID;nsid=2131214705;c=1752611638649)
20:33:59.048 INFO  BlockStateChange - BLOCK* processReport 0x86f85bd8d712ef66 with lease ID 0xdf742a1b72547459: from storage DS-d0715a24-53c7-4144-ba1a-12a8c244b014 node DatanodeRegistration(127.0.0.1:44703, datanodeUuid=cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, infoPort=45227, infoSecurePort=0, ipcPort=40257, storageInfo=lv=-57;cid=testClusterID;nsid=2131214705;c=1752611638649), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
20:33:59.049 INFO  DataNode - Successfully sent block report 0x86f85bd8d712ef66 with lease ID 0xdf742a1b72547459 to namenode: localhost/127.0.0.1:33947,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
20:33:59.049 INFO  DataNode - Got finalize command for block pool BP-179272973-10.1.0.111-1752611638649
20:33:59.091 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
20:33:59.091 INFO  MiniDFSCluster - Cluster is active
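
The lines above show a single-namenode, single-datanode MiniDFSCluster being formatted and brought up inside the test JVM; once it reports active, the test stages /user/runner/hdfs.bam and /user/runner/hdfs.bai into it and reads them back through Spark (jobs 226/227 below). A minimal, hedged sketch of driving such a cluster with the Hadoop test APIs visible in the log (the hadoop-hdfs 3.3.6 tests jar), not GATK's actual test utilities; the local paths are hypothetical placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public final class MiniDfsSketch {
        public static void main(String[] args) throws Exception {
            final Configuration conf = new Configuration();
            // Illustrative storage root; the run above used /tmp/minicluster_storage...
            conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "/tmp/minidfs-sketch");
            final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .numDataNodes(1)
                    .build();
            try {
                cluster.waitActive();
                final FileSystem fs = cluster.getFileSystem();
                // Hypothetical local inputs, staged to mirror the create calls
                // for /user/runner/hdfs.bam and /user/runner/hdfs.bai above.
                fs.copyFromLocalFile(new Path("/local/path/to/test.bam"),
                                     new Path("/user/runner/hdfs.bam"));
                fs.copyFromLocalFile(new Path("/local/path/to/test.bam.bai"),
                                     new Path("/user/runner/hdfs.bai"));
                System.out.println("staged at " + fs.getUri() + "/user/runner/hdfs.bam");
            } finally {
                // Mirrors the "Shutting down the Mini HDFS Cluster" phase at the end of the log.
                cluster.shutdown();
            }
        }
    }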
20:33:59.093 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:33:59.094 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.bam	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
20:33:59.095 INFO  StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:44703 for /user/runner/hdfs.bam
20:33:59.096 INFO  DataNode - Receiving BP-179272973-10.1.0.111-1752611638649:blk_1073741825_1001 src: /127.0.0.1:54052 dest: /127.0.0.1:44703
20:33:59.097 INFO  clienttrace - src: /127.0.0.1:54052, dest: /127.0.0.1:44703, bytes: 2396, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1175340596_1, offset: 0, srvID: cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, blockid: BP-179272973-10.1.0.111-1752611638649:blk_1073741825_1001, duration(ns): 406418
20:33:59.097 INFO  DataNode - PacketResponder: BP-179272973-10.1.0.111-1752611638649:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
20:33:59.098 INFO  FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.bam
20:33:59.499 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.bam is closed by DFSClient_NONMAPREDUCE_-1175340596_1
20:33:59.499 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:33:59.500 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.bai	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
20:33:59.502 INFO  StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:44703 for /user/runner/hdfs.bai
20:33:59.502 INFO  DataNode - Receiving BP-179272973-10.1.0.111-1752611638649:blk_1073741826_1002 src: /127.0.0.1:54056 dest: /127.0.0.1:44703
20:33:59.504 INFO  clienttrace - src: /127.0.0.1:54056, dest: /127.0.0.1:44703, bytes: 656, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1175340596_1, offset: 0, srvID: cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804, blockid: BP-179272973-10.1.0.111-1752611638649:blk_1073741826_1002, duration(ns): 404482
20:33:59.504 INFO  DataNode - PacketResponder: BP-179272973-10.1.0.111-1752611638649:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
20:33:59.504 INFO  FSNamesystem - BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.bai
20:33:59.905 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.bai is closed by DFSClient_NONMAPREDUCE_-1175340596_1
20:33:59.907 INFO  MemoryStore - Block broadcast_612 stored as values in memory (estimated size 298.3 KiB, free 1919.7 MiB)
20:33:59.914 INFO  MemoryStore - Block broadcast_612_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:33:59.914 INFO  BlockManagerInfo - Added broadcast_612_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1920.0 MiB)
20:33:59.914 INFO  SparkContext - Created broadcast 612 from newAPIHadoopFile at PathSplitSource.java:96
20:33:59.934 INFO  FileInputFormat - Total input files to process : 1
20:33:59.969 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:212
20:33:59.970 INFO  DAGScheduler - Got job 226 (collect at ReadsSparkSourceUnitTest.java:212) with 1 output partitions
20:33:59.970 INFO  DAGScheduler - Final stage: ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212)
20:33:59.970 INFO  DAGScheduler - Parents of final stage: List()
20:33:59.970 INFO  DAGScheduler - Missing parents: List()
20:33:59.970 INFO  DAGScheduler - Submitting ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96), which has no missing parents
20:33:59.986 INFO  MemoryStore - Block broadcast_613 stored as values in memory (estimated size 426.4 KiB, free 1919.2 MiB)
20:33:59.988 INFO  MemoryStore - Block broadcast_613_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1919.1 MiB)
20:33:59.988 INFO  BlockManagerInfo - Added broadcast_613_piece0 in memory on localhost:45281 (size: 154.0 KiB, free: 1919.8 MiB)
20:33:59.988 INFO  SparkContext - Created broadcast 613 from broadcast at DAGScheduler.scala:1580
20:33:59.988 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:33:59.988 INFO  TaskSchedulerImpl - Adding task set 290.0 with 1 tasks resource profile 0
20:33:59.989 INFO  TaskSetManager - Starting task 0.0 in stage 290.0 (TID 377) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7899 bytes) 
20:33:59.989 INFO  Executor - Running task 0.0 in stage 290.0 (TID 377)
20:34:00.020 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/spark/datasources/ReadsSparkSource/hdfs_file_test.bam:0+2396
20:34:00.023 INFO  Executor - Finished task 0.0 in stage 290.0 (TID 377). 5288 bytes result sent to driver
20:34:00.023 INFO  TaskSetManager - Finished task 0.0 in stage 290.0 (TID 377) in 34 ms on localhost (executor driver) (1/1)
20:34:00.023 INFO  TaskSchedulerImpl - Removed TaskSet 290.0, whose tasks have all completed, from pool 
20:34:00.023 INFO  DAGScheduler - ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212) finished in 0.053 s
20:34:00.023 INFO  DAGScheduler - Job 226 is finished. Cancelling potential speculative or zombie tasks for this job
20:34:00.023 INFO  TaskSchedulerImpl - Killing all running tasks in stage 290: Stage finished
20:34:00.023 INFO  DAGScheduler - Job 226 finished: collect at ReadsSparkSourceUnitTest.java:212, took 0.054095 s
20:34:00.024 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.025 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.025 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.026 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.026 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
20:34:00.027 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.027 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.027 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.029 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:34:00.030 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:34:00.030 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:34:00.030 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.sbi	dst=null	perm=null	proto=rpc
20:34:00.031 INFO  MemoryStore - Block broadcast_614 stored as values in memory (estimated size 298.2 KiB, free 1918.8 MiB)
20:34:00.038 INFO  MemoryStore - Block broadcast_614_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.8 MiB)
20:34:00.038 INFO  BlockManagerInfo - Added broadcast_614_piece0 in memory on localhost:45281 (size: 50.2 KiB, free: 1919.8 MiB)
20:34:00.038 INFO  SparkContext - Created broadcast 614 from newAPIHadoopFile at PathSplitSource.java:96
20:34:00.063 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.064 INFO  FileInputFormat - Total input files to process : 1
20:34:00.064 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.101 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:213
20:34:00.101 INFO  DAGScheduler - Got job 227 (collect at ReadsSparkSourceUnitTest.java:213) with 1 output partitions
20:34:00.101 INFO  DAGScheduler - Final stage: ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213)
20:34:00.101 INFO  DAGScheduler - Parents of final stage: List()
20:34:00.101 INFO  DAGScheduler - Missing parents: List()
20:34:00.101 INFO  DAGScheduler - Submitting ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96), which has no missing parents
20:34:00.118 INFO  MemoryStore - Block broadcast_615 stored as values in memory (estimated size 426.4 KiB, free 1918.3 MiB)
20:34:00.119 INFO  MemoryStore - Block broadcast_615_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1918.2 MiB)
20:34:00.119 INFO  BlockManagerInfo - Added broadcast_615_piece0 in memory on localhost:45281 (size: 154.0 KiB, free: 1919.6 MiB)
20:34:00.120 INFO  SparkContext - Created broadcast 615 from broadcast at DAGScheduler.scala:1580
20:34:00.120 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:34:00.120 INFO  TaskSchedulerImpl - Adding task set 291.0 with 1 tasks resource profile 0
20:34:00.120 INFO  TaskSetManager - Starting task 0.0 in stage 291.0 (TID 378) (localhost, executor driver, partition 0, ANY, 7796 bytes) 
20:34:00.120 INFO  Executor - Running task 0.0 in stage 291.0 (TID 378)
20:34:00.150 INFO  NewHadoopRDD - Input split: hdfs://localhost:33947/user/runner/hdfs.bam:0+2396
20:34:00.151 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.151 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.152 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:34:00.152 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.152 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.153 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
20:34:00.153 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.154 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.154 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.155 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:34:00.156 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:34:00.156 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:34:00.156 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.157 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.157 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:34:00.158 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.158 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
20:34:00.159 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
20:34:00.159 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.159 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.159 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
20:34:00.160 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:34:00.161 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:34:00.161 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:34:00.162 INFO  Executor - Finished task 0.0 in stage 291.0 (TID 378). 5288 bytes result sent to driver
20:34:00.162 INFO  TaskSetManager - Finished task 0.0 in stage 291.0 (TID 378) in 42 ms on localhost (executor driver) (1/1)
20:34:00.162 INFO  TaskSchedulerImpl - Removed TaskSet 291.0, whose tasks have all completed, from pool 
20:34:00.162 INFO  DAGScheduler - ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213) finished in 0.060 s
20:34:00.162 INFO  DAGScheduler - Job 227 is finished. Cancelling potential speculative or zombie tasks for this job
20:34:00.162 INFO  TaskSchedulerImpl - Killing all running tasks in stage 291: Stage finished
20:34:00.163 INFO  DAGScheduler - Job 227 finished: collect at ReadsSparkSourceUnitTest.java:213, took 0.061582 s
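
Jobs 226 and 227 (collect at ReadsSparkSourceUnitTest.java:212 and :213) gather the reads loaded from the local copy and from hdfs://localhost:33947/user/runner/hdfs.bam so the two result sets can be compared. A hedged sketch of that kind of comparison over two collected read lists; the helper name and equality criterion are assumptions, not the test's actual assertion code.

    import htsjdk.samtools.SAMRecord;
    import java.util.List;
    import java.util.Objects;

    final class CompareReadsSketch {
        /** True if both collected lists contain the same records, in the same order. */
        static boolean sameReads(List<SAMRecord> local, List<SAMRecord> fromHdfs) {
            if (local.size() != fromHdfs.size()) {
                return false;
            }
            for (int i = 0; i < local.size(); i++) {
                // getSAMString() renders the full record, so any differing field is caught.
                if (!Objects.equals(local.get(i).getSAMString(),
                                    fromHdfs.get(i).getSAMString())) {
                    return false;
                }
            }
            return true;
        }
    }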
20:34:00.163 INFO  MiniDFSCluster - Shutting down the Mini HDFS Cluster
20:34:00.163 INFO  MiniDFSCluster - Shutting down DataNode 0
20:34:00.163 INFO  DirectoryScanner - Shutdown has been called
20:34:00.163 INFO  DataNode - Closing all peers.
20:34:00.163 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3095298801961576547/data/data2, DS-83a5a5ed-9040-48ad-94a7-0537269bbf5c) exiting.
20:34:00.163 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3095298801961576547/data/data1, DS-d0715a24-53c7-4144-ba1a-12a8c244b014) exiting.
20:34:00.166 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@72dff228{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:34:00.167 INFO  AbstractConnector - Stopped ServerConnector@62c5b46c{HTTP/1.1, (http/1.1)}{localhost:0}
20:34:00.167 INFO  session - node0 Stopped scavenging
20:34:00.167 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@6c3a42a5{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:34:00.168 INFO  DataNode - Waiting up to 30 seconds for transfer threads to complete
20:34:00.168 INFO  Server - Stopping server on 40257
20:34:00.168 INFO  Server - Stopping IPC Server listener on 0
20:34:00.168 INFO  Server - Stopping IPC Server Responder
20:34:00.169 ERROR DataNode - Command processor encountered interrupt and exit.
20:34:00.169 WARN  DataNode - Ending command processor service for: Thread[Command processor,5,main]
20:34:00.169 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:34:00.169 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:34:00.169 WARN  DataNode - Ending block pool service for: Block pool BP-179272973-10.1.0.111-1752611638649 (Datanode Uuid cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804) service to localhost/127.0.0.1:33947
20:34:00.169 INFO  DataNode - Removed Block pool BP-179272973-10.1.0.111-1752611638649 (Datanode Uuid cf9ca3b8-cd19-4ca1-8f4b-6d6bb73bf804)
20:34:00.169 INFO  FsDatasetImpl - Removing block pool BP-179272973-10.1.0.111-1752611638649
20:34:00.170 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:34:00.170 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:34:00.170 INFO  FsDatasetAsyncDiskService - Shutting down all async disk service threads
20:34:00.170 INFO  FsDatasetAsyncDiskService - All async disk service threads have been shut down
20:34:00.170 INFO  RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
20:34:00.170 INFO  RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
20:34:00.170 INFO  DataNode - Shutdown complete.
20:34:00.170 INFO  MiniDFSCluster - Shutting down the namenode
20:34:00.170 INFO  FSNamesystem - Stopping services started for active state
20:34:00.170 INFO  FSEditLog - Ending log segment 1, 13
20:34:00.170 INFO  FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
20:34:00.170 INFO  FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
20:34:00.171 INFO  FSEditLog - Number of transactions: 14 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 3 Number of syncs: 12 SyncTimes(ms): 1 1 
20:34:00.171 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage3095298801961576547/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage3095298801961576547/name-0-1/current/edits_0000000000000000001-0000000000000000014
20:34:00.171 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage3095298801961576547/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage3095298801961576547/name-0-2/current/edits_0000000000000000001-0000000000000000014
20:34:00.171 INFO  FSEditLog - FSEditLogAsync was interrupted, exiting
20:34:00.171 INFO  CacheReplicationMonitor - Shutting down CacheReplicationMonitor
20:34:00.171 INFO  Server - Stopping server on 33947
20:34:00.172 INFO  Server - Stopping IPC Server listener on 0
20:34:00.172 INFO  Server - Stopping IPC Server Responder
20:34:00.172 INFO  BlockManager - Stopping RedundancyMonitor.
20:34:00.172 INFO  BlockManager - Stopping MarkedDeleteBlockScrubber.
20:34:00.176 INFO  FSNamesystem - Stopping services started for active state
20:34:00.176 INFO  FSNamesystem - Stopping services started for standby state
20:34:00.177 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@747c529{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:34:00.177 INFO  AbstractConnector - Stopped ServerConnector@2fbf01ba{HTTP/1.1, (http/1.1)}{localhost:0}
20:34:00.177 INFO  session - node0 Stopped scavenging
20:34:00.177 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@4e7b9be{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:34:00.178 INFO  MetricsSystemImpl - Stopping DataNode metrics system...
20:34:00.178 INFO  MetricsSystemImpl - DataNode metrics system stopped.
20:34:00.178 INFO  MetricsSystemImpl - DataNode metrics system shutdown complete.