Class org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSourceUnitTest

28 tests, 0 failures, 0 ignored, 4.441s duration, 100% successful

Tests

Test Duration Result
loadReadsNonExistentReference 0.001s passed
readsSparkSourceTestLenient[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.156s passed
readsSparkSourceTestLenient[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.157s passed
readsSparkSourceTestSilent[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.161s passed
readsSparkSourceTestSilent[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.129s passed
readsSparkSourceTestStrict[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.028s passed
readsSparkSourceTestStrict[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.027s passed
readsSparkSourceTest[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.174s passed
readsSparkSourceTest[1](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam, null) 0.177s passed
readsSparkSourceTest[2](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, null) 0.166s passed
readsSparkSourceTest[3](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.140s passed
readsSparkSourceTest[4](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.fasta) 0.129s passed
readsSparkSourceUnknownHostTest 0.005s passed
shardedReadsSparkSourceTest[0](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam, null) 0.181s passed
testCRAMReferenceFromHDFS 1.183s passed
testCRAMReferenceRequired 0s passed
testCheckCRAMReference 0s passed
testGetHeader[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.028s passed
testGetHeader[1](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam, null) 0.031s passed
testGetHeader[2](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, null) 0.029s passed
testGetHeader[3](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.022s passed
testGetHeader[4](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.fasta) 0.023s passed
testHeadersAreStripped 0.136s passed
testIntervals 0.094s passed
testIntervalsWithUnmapped 0.085s passed
testPartitionSizing 0.056s passed
testReadFromFileAndHDFS 1.123s passed
testReject2BitCRAMReference 0s passed
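
The bracketed arguments in the test names above are TestNG data-provider parameters: each row is one invocation of a parameterized test with a reads file (BAM or CRAM) and an optional reference FASTA (null when no reference is required, as for the BAM inputs). The sketch below illustrates only that parameterization pattern; the class name, provider name, test body, and relative paths are illustrative assumptions, not the actual ReadsSparkSourceUnitTest source.

import java.io.File;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Illustrative sketch: mirrors the (reads, reference) parameter pairs shown in the
// report above. Names and assertions are assumptions, not the real test class.
public class ReadsAndReferenceProviderSketch {

    @DataProvider(name = "readsAndReference")
    public Object[][] readsAndReference() {
        return new Object[][] {
            // BAM inputs need no reference, so the second argument is null.
            { "src/test/resources/NA12878.chr17_69k_70k.dictFix.bam", null },
            // CRAM inputs are paired with the reference FASTA they were compressed against.
            { "src/test/resources/NA12878.chr17_69k_70k.dictFix.cram",
              "src/test/resources/human_g1k_v37.chr17_1Mb.fasta" },
        };
    }

    @Test(dataProvider = "readsAndReference")
    public void readsInputIsUsable(final String reads, final String reference) {
        // Placeholder assertion: the real tests load the reads through ReadsSparkSource
        // and compare them against a non-Spark reader; here we only check the inputs exist.
        Assert.assertTrue(new File(reads).exists(), "missing reads file: " + reads);
        if (reference != null) {
            Assert.assertTrue(new File(reference).exists(), "missing reference: " + reference);
        }
    }
}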

Standard error

19:49:01.248 INFO  MemoryStore - Block broadcast_536 stored as values in memory (estimated size 297.9 KiB, free 1916.8 MiB)
19:49:01.254 INFO  MemoryStore - Block broadcast_536_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1916.7 MiB)
19:49:01.255 INFO  BlockManagerInfo - Added broadcast_536_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.3 MiB)
19:49:01.255 INFO  SparkContext - Created broadcast 536 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.278 INFO  MemoryStore - Block broadcast_537 stored as values in memory (estimated size 297.9 KiB, free 1916.4 MiB)
19:49:01.284 INFO  MemoryStore - Block broadcast_537_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1916.4 MiB)
19:49:01.285 INFO  BlockManagerInfo - Added broadcast_537_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.3 MiB)
19:49:01.285 INFO  SparkContext - Created broadcast 537 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.308 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:01.308 INFO  DAGScheduler - Got job 201 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:01.308 INFO  DAGScheduler - Final stage: ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:01.308 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.308 INFO  DAGScheduler - Missing parents: List()
19:49:01.308 INFO  DAGScheduler - Submitting ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:01.309 INFO  MemoryStore - Block broadcast_538 stored as values in memory (estimated size 3.0 KiB, free 1916.4 MiB)
19:49:01.309 INFO  MemoryStore - Block broadcast_538_piece0 stored as bytes in memory (estimated size 1777.0 B, free 1916.4 MiB)
19:49:01.309 INFO  BlockManagerInfo - Added broadcast_538_piece0 in memory on localhost:36125 (size: 1777.0 B, free: 1919.3 MiB)
19:49:01.310 INFO  SparkContext - Created broadcast 538 from broadcast at DAGScheduler.scala:1580
19:49:01.310 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:01.310 INFO  TaskSchedulerImpl - Adding task set 265.0 with 4 tasks resource profile 0
19:49:01.311 INFO  TaskSetManager - Starting task 0.0 in stage 265.0 (TID 321) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
19:49:01.311 INFO  TaskSetManager - Starting task 1.0 in stage 265.0 (TID 322) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
19:49:01.311 INFO  TaskSetManager - Starting task 2.0 in stage 265.0 (TID 323) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
19:49:01.311 INFO  TaskSetManager - Starting task 3.0 in stage 265.0 (TID 324) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
19:49:01.312 INFO  Executor - Running task 2.0 in stage 265.0 (TID 323)
19:49:01.313 INFO  Executor - Running task 0.0 in stage 265.0 (TID 321)
19:49:01.313 INFO  Executor - Finished task 2.0 in stage 265.0 (TID 323). 40337 bytes result sent to driver
19:49:01.313 INFO  Executor - Running task 3.0 in stage 265.0 (TID 324)
19:49:01.314 INFO  Executor - Finished task 0.0 in stage 265.0 (TID 321). 40312 bytes result sent to driver
19:49:01.314 INFO  Executor - Running task 1.0 in stage 265.0 (TID 322)
19:49:01.314 INFO  Executor - Finished task 3.0 in stage 265.0 (TID 324). 40762 bytes result sent to driver
19:49:01.315 INFO  Executor - Finished task 1.0 in stage 265.0 (TID 322). 40426 bytes result sent to driver
19:49:01.315 INFO  TaskSetManager - Finished task 2.0 in stage 265.0 (TID 323) in 4 ms on localhost (executor driver) (1/4)
19:49:01.316 INFO  TaskSetManager - Finished task 0.0 in stage 265.0 (TID 321) in 6 ms on localhost (executor driver) (2/4)
19:49:01.316 INFO  TaskSetManager - Finished task 1.0 in stage 265.0 (TID 322) in 5 ms on localhost (executor driver) (3/4)
19:49:01.317 INFO  TaskSetManager - Finished task 3.0 in stage 265.0 (TID 324) in 6 ms on localhost (executor driver) (4/4)
19:49:01.317 INFO  TaskSchedulerImpl - Removed TaskSet 265.0, whose tasks have all completed, from pool 
19:49:01.317 INFO  DAGScheduler - ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.009 s
19:49:01.318 INFO  DAGScheduler - Job 201 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.318 INFO  TaskSchedulerImpl - Killing all running tasks in stage 265: Stage finished
19:49:01.318 INFO  DAGScheduler - Job 201 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.010439 s
19:49:01.319 INFO  FileInputFormat - Total input files to process : 1
19:49:01.360 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:01.361 INFO  DAGScheduler - Got job 202 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:01.361 INFO  DAGScheduler - Final stage: ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:01.361 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.361 INFO  DAGScheduler - Missing parents: List()
19:49:01.361 INFO  DAGScheduler - Submitting ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:01.380 INFO  MemoryStore - Block broadcast_539 stored as values in memory (estimated size 426.2 KiB, free 1916.0 MiB)
19:49:01.382 INFO  MemoryStore - Block broadcast_539_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1915.8 MiB)
19:49:01.382 INFO  BlockManagerInfo - Added broadcast_539_piece0 in memory on localhost:36125 (size: 153.6 KiB, free: 1919.1 MiB)
19:49:01.382 INFO  SparkContext - Created broadcast 539 from broadcast at DAGScheduler.scala:1580
19:49:01.382 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:01.382 INFO  TaskSchedulerImpl - Adding task set 266.0 with 1 tasks resource profile 0
19:49:01.383 INFO  TaskSetManager - Starting task 0.0 in stage 266.0 (TID 325) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
19:49:01.383 INFO  Executor - Running task 0.0 in stage 266.0 (TID 325)
19:49:01.415 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
19:49:01.418 INFO  Executor - Finished task 0.0 in stage 266.0 (TID 325). 159572 bytes result sent to driver
19:49:01.419 INFO  TaskSetManager - Finished task 0.0 in stage 266.0 (TID 325) in 36 ms on localhost (executor driver) (1/1)
19:49:01.419 INFO  TaskSchedulerImpl - Removed TaskSet 266.0, whose tasks have all completed, from pool 
19:49:01.419 INFO  DAGScheduler - ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.058 s
19:49:01.419 INFO  DAGScheduler - Job 202 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.419 INFO  TaskSchedulerImpl - Killing all running tasks in stage 266: Stage finished
19:49:01.419 INFO  DAGScheduler - Job 202 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.059229 s
19:49:01.423 INFO  MemoryStore - Block broadcast_540 stored as values in memory (estimated size 298.0 KiB, free 1915.5 MiB)
19:49:01.429 INFO  BlockManagerInfo - Removed broadcast_538_piece0 on localhost:36125 in memory (size: 1777.0 B, free: 1919.2 MiB)
19:49:01.429 INFO  BlockManagerInfo - Removed broadcast_531_piece0 on localhost:36125 in memory (size: 233.0 B, free: 1919.2 MiB)
19:49:01.430 INFO  BlockManagerInfo - Removed broadcast_539_piece0 on localhost:36125 in memory (size: 153.6 KiB, free: 1919.3 MiB)
19:49:01.430 INFO  BlockManagerInfo - Removed broadcast_535_piece0 on localhost:36125 in memory (size: 54.5 KiB, free: 1919.4 MiB)
19:49:01.430 INFO  BlockManagerInfo - Removed broadcast_529_piece0 on localhost:36125 in memory (size: 166.1 KiB, free: 1919.5 MiB)
19:49:01.431 INFO  BlockManagerInfo - Removed broadcast_534_piece0 on localhost:36125 in memory (size: 153.6 KiB, free: 1919.7 MiB)
19:49:01.431 INFO  BlockManagerInfo - Removed broadcast_530_piece0 on localhost:36125 in memory (size: 67.0 KiB, free: 1919.7 MiB)
19:49:01.431 INFO  BlockManagerInfo - Removed broadcast_525_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:01.432 INFO  BlockManagerInfo - Removed broadcast_528_piece0 on localhost:36125 in memory (size: 9.6 KiB, free: 1919.8 MiB)
19:49:01.432 INFO  BlockManagerInfo - Removed broadcast_537_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:01.432 INFO  BlockManagerInfo - Removed broadcast_533_piece0 on localhost:36125 in memory (size: 54.5 KiB, free: 1919.9 MiB)
19:49:01.433 INFO  BlockManagerInfo - Removed broadcast_536_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.9 MiB)
19:49:01.433 INFO  BlockManagerInfo - Removed broadcast_527_piece0 on localhost:36125 in memory (size: 9.6 KiB, free: 1920.0 MiB)
19:49:01.434 INFO  BlockManagerInfo - Removed broadcast_532_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:01.435 INFO  MemoryStore - Block broadcast_540_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.7 MiB)
19:49:01.435 INFO  BlockManagerInfo - Added broadcast_540_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1920.0 MiB)
19:49:01.435 INFO  SparkContext - Created broadcast 540 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.458 INFO  MemoryStore - Block broadcast_541 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
19:49:01.464 INFO  MemoryStore - Block broadcast_541_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
19:49:01.464 INFO  BlockManagerInfo - Added broadcast_541_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.9 MiB)
19:49:01.465 INFO  SparkContext - Created broadcast 541 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.487 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:01.487 INFO  DAGScheduler - Got job 203 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:01.487 INFO  DAGScheduler - Final stage: ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:01.487 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.487 INFO  DAGScheduler - Missing parents: List()
19:49:01.487 INFO  DAGScheduler - Submitting ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:01.488 INFO  MemoryStore - Block broadcast_542 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
19:49:01.488 INFO  MemoryStore - Block broadcast_542_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
19:49:01.488 INFO  BlockManagerInfo - Added broadcast_542_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.9 MiB)
19:49:01.489 INFO  SparkContext - Created broadcast 542 from broadcast at DAGScheduler.scala:1580
19:49:01.489 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:01.489 INFO  TaskSchedulerImpl - Adding task set 267.0 with 4 tasks resource profile 0
19:49:01.490 INFO  TaskSetManager - Starting task 0.0 in stage 267.0 (TID 326) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
19:49:01.491 INFO  TaskSetManager - Starting task 1.0 in stage 267.0 (TID 327) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
19:49:01.491 INFO  TaskSetManager - Starting task 2.0 in stage 267.0 (TID 328) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
19:49:01.492 INFO  TaskSetManager - Starting task 3.0 in stage 267.0 (TID 329) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
19:49:01.492 INFO  Executor - Running task 0.0 in stage 267.0 (TID 326)
19:49:01.492 INFO  Executor - Running task 2.0 in stage 267.0 (TID 328)
19:49:01.492 INFO  Executor - Running task 1.0 in stage 267.0 (TID 327)
19:49:01.492 INFO  Executor - Running task 3.0 in stage 267.0 (TID 329)
19:49:01.494 INFO  Executor - Finished task 2.0 in stage 267.0 (TID 328). 163356 bytes result sent to driver
19:49:01.494 INFO  Executor - Finished task 1.0 in stage 267.0 (TID 327). 163368 bytes result sent to driver
19:49:01.494 INFO  Executor - Finished task 0.0 in stage 267.0 (TID 326). 163161 bytes result sent to driver
19:49:01.495 INFO  Executor - Finished task 3.0 in stage 267.0 (TID 329). 162694 bytes result sent to driver
19:49:01.496 INFO  TaskSetManager - Finished task 2.0 in stage 267.0 (TID 328) in 5 ms on localhost (executor driver) (1/4)
19:49:01.496 INFO  TaskSetManager - Finished task 1.0 in stage 267.0 (TID 327) in 6 ms on localhost (executor driver) (2/4)
19:49:01.496 INFO  TaskSetManager - Finished task 3.0 in stage 267.0 (TID 329) in 5 ms on localhost (executor driver) (3/4)
19:49:01.496 INFO  TaskSetManager - Finished task 0.0 in stage 267.0 (TID 326) in 6 ms on localhost (executor driver) (4/4)
19:49:01.497 INFO  TaskSchedulerImpl - Removed TaskSet 267.0, whose tasks have all completed, from pool 
19:49:01.497 INFO  DAGScheduler - ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.009 s
19:49:01.497 INFO  DAGScheduler - Job 203 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.497 INFO  TaskSchedulerImpl - Killing all running tasks in stage 267: Stage finished
19:49:01.497 INFO  DAGScheduler - Job 203 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.009739 s
19:49:01.497 INFO  FileInputFormat - Total input files to process : 1
19:49:01.533 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:01.533 INFO  DAGScheduler - Got job 204 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:01.533 INFO  DAGScheduler - Final stage: ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:01.533 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.533 INFO  DAGScheduler - Missing parents: List()
19:49:01.534 INFO  DAGScheduler - Submitting ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:01.554 INFO  MemoryStore - Block broadcast_543 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
19:49:01.555 INFO  MemoryStore - Block broadcast_543_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1918.7 MiB)
19:49:01.555 INFO  BlockManagerInfo - Added broadcast_543_piece0 in memory on localhost:36125 (size: 153.6 KiB, free: 1919.8 MiB)
19:49:01.555 INFO  SparkContext - Created broadcast 543 from broadcast at DAGScheduler.scala:1580
19:49:01.555 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:01.555 INFO  TaskSchedulerImpl - Adding task set 268.0 with 1 tasks resource profile 0
19:49:01.556 INFO  TaskSetManager - Starting task 0.0 in stage 268.0 (TID 330) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes) 
19:49:01.556 INFO  Executor - Running task 0.0 in stage 268.0 (TID 330)
19:49:01.585 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
19:49:01.595 INFO  Executor - Finished task 0.0 in stage 268.0 (TID 330). 650141 bytes result sent to driver
19:49:01.597 INFO  TaskSetManager - Finished task 0.0 in stage 268.0 (TID 330) in 41 ms on localhost (executor driver) (1/1)
19:49:01.597 INFO  TaskSchedulerImpl - Removed TaskSet 268.0, whose tasks have all completed, from pool 
19:49:01.597 INFO  DAGScheduler - ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.063 s
19:49:01.597 INFO  DAGScheduler - Job 204 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.597 INFO  TaskSchedulerImpl - Killing all running tasks in stage 268: Stage finished
19:49:01.597 INFO  DAGScheduler - Job 204 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.063984 s
19:49:01.599 INFO  MemoryStore - Block broadcast_544 stored as values in memory (estimated size 298.0 KiB, free 1918.5 MiB)
19:49:01.605 INFO  MemoryStore - Block broadcast_544_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.4 MiB)
19:49:01.605 INFO  BlockManagerInfo - Added broadcast_544_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.7 MiB)
19:49:01.606 INFO  SparkContext - Created broadcast 544 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.629 INFO  MemoryStore - Block broadcast_545 stored as values in memory (estimated size 298.0 KiB, free 1918.1 MiB)
19:49:01.635 INFO  MemoryStore - Block broadcast_545_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.1 MiB)
19:49:01.635 INFO  BlockManagerInfo - Added broadcast_545_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.7 MiB)
19:49:01.635 INFO  SparkContext - Created broadcast 545 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.657 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:01.658 INFO  DAGScheduler - Got job 205 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:01.658 INFO  DAGScheduler - Final stage: ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:01.658 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.658 INFO  DAGScheduler - Missing parents: List()
19:49:01.658 INFO  DAGScheduler - Submitting ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:01.658 INFO  MemoryStore - Block broadcast_546 stored as values in memory (estimated size 3.0 KiB, free 1918.1 MiB)
19:49:01.658 INFO  MemoryStore - Block broadcast_546_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.1 MiB)
19:49:01.658 INFO  BlockManagerInfo - Added broadcast_546_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.7 MiB)
19:49:01.659 INFO  SparkContext - Created broadcast 546 from broadcast at DAGScheduler.scala:1580
19:49:01.659 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:01.659 INFO  TaskSchedulerImpl - Adding task set 269.0 with 4 tasks resource profile 0
19:49:01.659 INFO  TaskSetManager - Starting task 0.0 in stage 269.0 (TID 331) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
19:49:01.660 INFO  TaskSetManager - Starting task 1.0 in stage 269.0 (TID 332) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
19:49:01.660 INFO  TaskSetManager - Starting task 2.0 in stage 269.0 (TID 333) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
19:49:01.660 INFO  TaskSetManager - Starting task 3.0 in stage 269.0 (TID 334) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
19:49:01.660 INFO  Executor - Running task 2.0 in stage 269.0 (TID 333)
19:49:01.660 INFO  Executor - Running task 1.0 in stage 269.0 (TID 332)
19:49:01.660 INFO  Executor - Running task 3.0 in stage 269.0 (TID 334)
19:49:01.660 INFO  Executor - Running task 0.0 in stage 269.0 (TID 331)
19:49:01.662 INFO  Executor - Finished task 2.0 in stage 269.0 (TID 333). 163356 bytes result sent to driver
19:49:01.662 INFO  Executor - Finished task 1.0 in stage 269.0 (TID 332). 163368 bytes result sent to driver
19:49:01.663 INFO  Executor - Finished task 3.0 in stage 269.0 (TID 334). 162694 bytes result sent to driver
19:49:01.663 INFO  Executor - Finished task 0.0 in stage 269.0 (TID 331). 163161 bytes result sent to driver
19:49:01.664 INFO  TaskSetManager - Finished task 3.0 in stage 269.0 (TID 334) in 4 ms on localhost (executor driver) (1/4)
19:49:01.664 INFO  TaskSetManager - Finished task 1.0 in stage 269.0 (TID 332) in 5 ms on localhost (executor driver) (2/4)
19:49:01.664 INFO  TaskSetManager - Finished task 0.0 in stage 269.0 (TID 331) in 5 ms on localhost (executor driver) (3/4)
19:49:01.664 INFO  TaskSetManager - Finished task 2.0 in stage 269.0 (TID 333) in 4 ms on localhost (executor driver) (4/4)
19:49:01.664 INFO  TaskSchedulerImpl - Removed TaskSet 269.0, whose tasks have all completed, from pool 
19:49:01.664 INFO  DAGScheduler - ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
19:49:01.665 INFO  DAGScheduler - Job 205 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.665 INFO  TaskSchedulerImpl - Killing all running tasks in stage 269: Stage finished
19:49:01.665 INFO  DAGScheduler - Job 205 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007377 s
19:49:01.665 INFO  FileInputFormat - Total input files to process : 1
19:49:01.700 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:01.701 INFO  DAGScheduler - Got job 206 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:01.701 INFO  DAGScheduler - Final stage: ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:01.701 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.701 INFO  DAGScheduler - Missing parents: List()
19:49:01.701 INFO  DAGScheduler - Submitting ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:01.718 INFO  MemoryStore - Block broadcast_547 stored as values in memory (estimated size 426.2 KiB, free 1917.6 MiB)
19:49:01.719 INFO  MemoryStore - Block broadcast_547_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.5 MiB)
19:49:01.719 INFO  BlockManagerInfo - Added broadcast_547_piece0 in memory on localhost:36125 (size: 153.7 KiB, free: 1919.5 MiB)
19:49:01.719 INFO  SparkContext - Created broadcast 547 from broadcast at DAGScheduler.scala:1580
19:49:01.720 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:01.720 INFO  TaskSchedulerImpl - Adding task set 270.0 with 1 tasks resource profile 0
19:49:01.720 INFO  TaskSetManager - Starting task 0.0 in stage 270.0 (TID 335) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7912 bytes) 
19:49:01.720 INFO  Executor - Running task 0.0 in stage 270.0 (TID 335)
19:49:01.749 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam:0+216896
19:49:01.761 INFO  Executor - Finished task 0.0 in stage 270.0 (TID 335). 650141 bytes result sent to driver
19:49:01.762 INFO  TaskSetManager - Finished task 0.0 in stage 270.0 (TID 335) in 42 ms on localhost (executor driver) (1/1)
19:49:01.762 INFO  TaskSchedulerImpl - Removed TaskSet 270.0, whose tasks have all completed, from pool 
19:49:01.763 INFO  DAGScheduler - ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.061 s
19:49:01.763 INFO  DAGScheduler - Job 206 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.763 INFO  TaskSchedulerImpl - Killing all running tasks in stage 270: Stage finished
19:49:01.763 INFO  DAGScheduler - Job 206 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.062242 s
19:49:01.765 INFO  MemoryStore - Block broadcast_548 stored as values in memory (estimated size 536.0 B, free 1917.5 MiB)
19:49:01.765 INFO  MemoryStore - Block broadcast_548_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.5 MiB)
19:49:01.765 INFO  BlockManagerInfo - Added broadcast_548_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.5 MiB)
19:49:01.766 INFO  SparkContext - Created broadcast 548 from broadcast at CramSource.java:114
19:49:01.766 INFO  MemoryStore - Block broadcast_549 stored as values in memory (estimated size 297.9 KiB, free 1917.2 MiB)
19:49:01.772 INFO  MemoryStore - Block broadcast_549_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.2 MiB)
19:49:01.772 INFO  BlockManagerInfo - Added broadcast_549_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.5 MiB)
19:49:01.773 INFO  SparkContext - Created broadcast 549 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.791 INFO  MemoryStore - Block broadcast_550 stored as values in memory (estimated size 536.0 B, free 1917.2 MiB)
19:49:01.795 INFO  MemoryStore - Block broadcast_550_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.2 MiB)
19:49:01.796 INFO  BlockManagerInfo - Added broadcast_550_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.5 MiB)
19:49:01.796 INFO  BlockManagerInfo - Removed broadcast_544_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.5 MiB)
19:49:01.796 INFO  SparkContext - Created broadcast 550 from broadcast at CramSource.java:114
19:49:01.796 INFO  BlockManagerInfo - Removed broadcast_540_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.5 MiB)
19:49:01.796 INFO  BlockManagerInfo - Removed broadcast_543_piece0 on localhost:36125 in memory (size: 153.6 KiB, free: 1919.7 MiB)
19:49:01.797 INFO  BlockManagerInfo - Removed broadcast_541_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.7 MiB)
19:49:01.797 INFO  MemoryStore - Block broadcast_551 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
19:49:01.797 INFO  BlockManagerInfo - Removed broadcast_547_piece0 on localhost:36125 in memory (size: 153.7 KiB, free: 1919.9 MiB)
19:49:01.797 INFO  BlockManagerInfo - Removed broadcast_545_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.9 MiB)
19:49:01.798 INFO  BlockManagerInfo - Removed broadcast_546_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.9 MiB)
19:49:01.798 INFO  BlockManagerInfo - Removed broadcast_542_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1920.0 MiB)
19:49:01.799 INFO  BlockManagerInfo - Removed broadcast_548_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1920.0 MiB)
19:49:01.799 INFO  BlockManagerInfo - Removed broadcast_549_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:01.804 INFO  MemoryStore - Block broadcast_551_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
19:49:01.804 INFO  BlockManagerInfo - Added broadcast_551_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1920.0 MiB)
19:49:01.804 INFO  SparkContext - Created broadcast 551 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.821 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:01.821 INFO  DAGScheduler - Got job 207 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:01.821 INFO  DAGScheduler - Final stage: ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:01.821 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.822 INFO  DAGScheduler - Missing parents: List()
19:49:01.822 INFO  DAGScheduler - Submitting ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:01.822 INFO  MemoryStore - Block broadcast_552 stored as values in memory (estimated size 3.0 KiB, free 1919.7 MiB)
19:49:01.822 INFO  MemoryStore - Block broadcast_552_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.7 MiB)
19:49:01.822 INFO  BlockManagerInfo - Added broadcast_552_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.9 MiB)
19:49:01.823 INFO  SparkContext - Created broadcast 552 from broadcast at DAGScheduler.scala:1580
19:49:01.823 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:01.823 INFO  TaskSchedulerImpl - Adding task set 271.0 with 4 tasks resource profile 0
19:49:01.823 INFO  TaskSetManager - Starting task 0.0 in stage 271.0 (TID 336) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
19:49:01.824 INFO  TaskSetManager - Starting task 1.0 in stage 271.0 (TID 337) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
19:49:01.824 INFO  TaskSetManager - Starting task 2.0 in stage 271.0 (TID 338) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
19:49:01.824 INFO  TaskSetManager - Starting task 3.0 in stage 271.0 (TID 339) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
19:49:01.825 INFO  Executor - Running task 2.0 in stage 271.0 (TID 338)
19:49:01.825 INFO  Executor - Running task 3.0 in stage 271.0 (TID 339)
19:49:01.825 INFO  Executor - Running task 1.0 in stage 271.0 (TID 337)
19:49:01.825 INFO  Executor - Running task 0.0 in stage 271.0 (TID 336)
19:49:01.826 INFO  Executor - Finished task 1.0 in stage 271.0 (TID 337). 39087 bytes result sent to driver
19:49:01.826 INFO  Executor - Finished task 3.0 in stage 271.0 (TID 339). 39300 bytes result sent to driver
19:49:01.826 INFO  Executor - Finished task 2.0 in stage 271.0 (TID 338). 39070 bytes result sent to driver
19:49:01.826 INFO  TaskSetManager - Finished task 3.0 in stage 271.0 (TID 339) in 2 ms on localhost (executor driver) (1/4)
19:49:01.827 INFO  Executor - Finished task 0.0 in stage 271.0 (TID 336). 38950 bytes result sent to driver
19:49:01.827 INFO  TaskSetManager - Finished task 2.0 in stage 271.0 (TID 338) in 3 ms on localhost (executor driver) (2/4)
19:49:01.827 INFO  TaskSetManager - Finished task 1.0 in stage 271.0 (TID 337) in 4 ms on localhost (executor driver) (3/4)
19:49:01.827 INFO  TaskSetManager - Finished task 0.0 in stage 271.0 (TID 336) in 4 ms on localhost (executor driver) (4/4)
19:49:01.827 INFO  TaskSchedulerImpl - Removed TaskSet 271.0, whose tasks have all completed, from pool 
19:49:01.827 INFO  DAGScheduler - ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
19:49:01.827 INFO  DAGScheduler - Job 207 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.827 INFO  TaskSchedulerImpl - Killing all running tasks in stage 271: Stage finished
19:49:01.827 INFO  DAGScheduler - Job 207 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006216 s
19:49:01.828 INFO  FileInputFormat - Total input files to process : 1
19:49:01.853 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:01.853 INFO  DAGScheduler - Got job 208 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:01.853 INFO  DAGScheduler - Final stage: ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:01.853 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.853 INFO  DAGScheduler - Missing parents: List()
19:49:01.854 INFO  DAGScheduler - Submitting ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:01.865 INFO  MemoryStore - Block broadcast_553 stored as values in memory (estimated size 286.8 KiB, free 1919.4 MiB)
19:49:01.866 INFO  MemoryStore - Block broadcast_553_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1919.3 MiB)
19:49:01.866 INFO  BlockManagerInfo - Added broadcast_553_piece0 in memory on localhost:36125 (size: 103.6 KiB, free: 1919.8 MiB)
19:49:01.866 INFO  SparkContext - Created broadcast 553 from broadcast at DAGScheduler.scala:1580
19:49:01.866 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:01.866 INFO  TaskSchedulerImpl - Adding task set 272.0 with 1 tasks resource profile 0
19:49:01.867 INFO  TaskSetManager - Starting task 0.0 in stage 272.0 (TID 340) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
19:49:01.867 INFO  Executor - Running task 0.0 in stage 272.0 (TID 340)
19:49:01.894 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
19:49:01.901 INFO  Executor - Finished task 0.0 in stage 272.0 (TID 340). 154058 bytes result sent to driver
19:49:01.902 INFO  TaskSetManager - Finished task 0.0 in stage 272.0 (TID 340) in 36 ms on localhost (executor driver) (1/1)
19:49:01.902 INFO  TaskSchedulerImpl - Removed TaskSet 272.0, whose tasks have all completed, from pool 
19:49:01.902 INFO  DAGScheduler - ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.048 s
19:49:01.902 INFO  DAGScheduler - Job 208 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.902 INFO  TaskSchedulerImpl - Killing all running tasks in stage 272: Stage finished
19:49:01.902 INFO  DAGScheduler - Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.049261 s
19:49:01.906 INFO  MemoryStore - Block broadcast_554 stored as values in memory (estimated size 608.0 B, free 1919.3 MiB)
19:49:01.906 INFO  MemoryStore - Block broadcast_554_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.3 MiB)
19:49:01.906 INFO  BlockManagerInfo - Added broadcast_554_piece0 in memory on localhost:36125 (size: 206.0 B, free: 1919.8 MiB)
19:49:01.907 INFO  SparkContext - Created broadcast 554 from broadcast at CramSource.java:114
19:49:01.907 INFO  MemoryStore - Block broadcast_555 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
19:49:01.917 INFO  MemoryStore - Block broadcast_555_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
19:49:01.918 INFO  BlockManagerInfo - Added broadcast_555_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:01.918 INFO  SparkContext - Created broadcast 555 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.935 INFO  MemoryStore - Block broadcast_556 stored as values in memory (estimated size 608.0 B, free 1918.9 MiB)
19:49:01.936 INFO  MemoryStore - Block broadcast_556_piece0 stored as bytes in memory (estimated size 206.0 B, free 1918.9 MiB)
19:49:01.936 INFO  BlockManagerInfo - Added broadcast_556_piece0 in memory on localhost:36125 (size: 206.0 B, free: 1919.8 MiB)
19:49:01.936 INFO  SparkContext - Created broadcast 556 from broadcast at CramSource.java:114
19:49:01.937 INFO  MemoryStore - Block broadcast_557 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
19:49:01.943 INFO  MemoryStore - Block broadcast_557_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
19:49:01.943 INFO  BlockManagerInfo - Added broadcast_557_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.7 MiB)
19:49:01.943 INFO  SparkContext - Created broadcast 557 from newAPIHadoopFile at PathSplitSource.java:96
19:49:01.960 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:01.960 INFO  DAGScheduler - Got job 209 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:01.960 INFO  DAGScheduler - Final stage: ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:01.960 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.960 INFO  DAGScheduler - Missing parents: List()
19:49:01.960 INFO  DAGScheduler - Submitting ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:01.960 INFO  MemoryStore - Block broadcast_558 stored as values in memory (estimated size 3.0 KiB, free 1918.6 MiB)
19:49:01.961 INFO  MemoryStore - Block broadcast_558_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.6 MiB)
19:49:01.961 INFO  BlockManagerInfo - Added broadcast_558_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.7 MiB)
19:49:01.961 INFO  SparkContext - Created broadcast 558 from broadcast at DAGScheduler.scala:1580
19:49:01.961 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:01.961 INFO  TaskSchedulerImpl - Adding task set 273.0 with 4 tasks resource profile 0
19:49:01.961 INFO  TaskSetManager - Starting task 0.0 in stage 273.0 (TID 341) (localhost, executor driver, partition 0, PROCESS_LOCAL, 8303 bytes) 
19:49:01.962 INFO  TaskSetManager - Starting task 1.0 in stage 273.0 (TID 342) (localhost, executor driver, partition 1, PROCESS_LOCAL, 8292 bytes) 
19:49:01.962 INFO  TaskSetManager - Starting task 2.0 in stage 273.0 (TID 343) (localhost, executor driver, partition 2, PROCESS_LOCAL, 8292 bytes) 
19:49:01.962 INFO  TaskSetManager - Starting task 3.0 in stage 273.0 (TID 344) (localhost, executor driver, partition 3, PROCESS_LOCAL, 8184 bytes) 
19:49:01.962 INFO  Executor - Running task 0.0 in stage 273.0 (TID 341)
19:49:01.962 INFO  Executor - Running task 1.0 in stage 273.0 (TID 342)
19:49:01.962 INFO  Executor - Running task 3.0 in stage 273.0 (TID 344)
19:49:01.963 INFO  Executor - Finished task 3.0 in stage 273.0 (TID 344). 1416 bytes result sent to driver
19:49:01.963 INFO  Executor - Finished task 0.0 in stage 273.0 (TID 341). 1535 bytes result sent to driver
19:49:01.963 INFO  Executor - Finished task 1.0 in stage 273.0 (TID 342). 1524 bytes result sent to driver
19:49:01.963 INFO  Executor - Running task 2.0 in stage 273.0 (TID 343)
19:49:01.964 INFO  TaskSetManager - Finished task 3.0 in stage 273.0 (TID 344) in 2 ms on localhost (executor driver) (1/4)
19:49:01.964 INFO  TaskSetManager - Finished task 1.0 in stage 273.0 (TID 342) in 3 ms on localhost (executor driver) (2/4)
19:49:01.964 INFO  Executor - Finished task 2.0 in stage 273.0 (TID 343). 1524 bytes result sent to driver
19:49:01.964 INFO  TaskSetManager - Finished task 0.0 in stage 273.0 (TID 341) in 3 ms on localhost (executor driver) (3/4)
19:49:01.965 INFO  TaskSetManager - Finished task 2.0 in stage 273.0 (TID 343) in 3 ms on localhost (executor driver) (4/4)
19:49:01.965 INFO  TaskSchedulerImpl - Removed TaskSet 273.0, whose tasks have all completed, from pool 
19:49:01.965 INFO  DAGScheduler - ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
19:49:01.965 INFO  DAGScheduler - Job 209 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:01.965 INFO  TaskSchedulerImpl - Killing all running tasks in stage 273: Stage finished
19:49:01.965 INFO  DAGScheduler - Job 209 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005317 s
19:49:01.965 INFO  FileInputFormat - Total input files to process : 1
19:49:01.990 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:01.990 INFO  DAGScheduler - Got job 210 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:01.990 INFO  DAGScheduler - Final stage: ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:01.990 INFO  DAGScheduler - Parents of final stage: List()
19:49:01.990 INFO  DAGScheduler - Missing parents: List()
19:49:01.990 INFO  DAGScheduler - Submitting ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:02.001 INFO  MemoryStore - Block broadcast_559 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
19:49:02.002 INFO  MemoryStore - Block broadcast_559_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
19:49:02.002 INFO  BlockManagerInfo - Added broadcast_559_piece0 in memory on localhost:36125 (size: 103.6 KiB, free: 1919.6 MiB)
19:49:02.002 INFO  SparkContext - Created broadcast 559 from broadcast at DAGScheduler.scala:1580
19:49:02.002 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:02.002 INFO  TaskSchedulerImpl - Adding task set 274.0 with 1 tasks resource profile 0
19:49:02.003 INFO  TaskSetManager - Starting task 0.0 in stage 274.0 (TID 345) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7851 bytes) 
19:49:02.003 INFO  Executor - Running task 0.0 in stage 274.0 (TID 345)
19:49:02.023 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram:0+13330
19:49:02.031 INFO  Executor - Finished task 0.0 in stage 274.0 (TID 345). 3822 bytes result sent to driver
19:49:02.031 INFO  BlockManagerInfo - Removed broadcast_558_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.6 MiB)
19:49:02.032 INFO  TaskSetManager - Finished task 0.0 in stage 274.0 (TID 345) in 28 ms on localhost (executor driver) (1/1)
19:49:02.032 INFO  TaskSchedulerImpl - Removed TaskSet 274.0, whose tasks have all completed, from pool 
19:49:02.032 INFO  DAGScheduler - ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.042 s
19:49:02.032 INFO  DAGScheduler - Job 210 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.032 INFO  TaskSchedulerImpl - Killing all running tasks in stage 274: Stage finished
19:49:02.032 INFO  DAGScheduler - Job 210 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.042276 s
19:49:02.032 INFO  BlockManagerInfo - Removed broadcast_554_piece0 on localhost:36125 in memory (size: 206.0 B, free: 1919.6 MiB)
19:49:02.033 INFO  BlockManagerInfo - Removed broadcast_550_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.6 MiB)
19:49:02.033 INFO  BlockManagerInfo - Removed broadcast_551_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.7 MiB)
19:49:02.034 INFO  BlockManagerInfo - Removed broadcast_555_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.7 MiB)
19:49:02.034 INFO  BlockManagerInfo - Removed broadcast_553_piece0 on localhost:36125 in memory (size: 103.6 KiB, free: 1919.8 MiB)
19:49:02.035 INFO  BlockManagerInfo - Removed broadcast_552_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.8 MiB)
19:49:02.035 INFO  MemoryStore - Block broadcast_560 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
19:49:02.042 INFO  MemoryStore - Block broadcast_560_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
19:49:02.042 INFO  BlockManagerInfo - Added broadcast_560_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.042 INFO  SparkContext - Created broadcast 560 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
19:49:02.065 INFO  MemoryStore - Block broadcast_561 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
19:49:02.071 INFO  MemoryStore - Block broadcast_561_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
19:49:02.071 INFO  BlockManagerInfo - Added broadcast_561_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.071 INFO  SparkContext - Created broadcast 561 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.093 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:02.094 INFO  DAGScheduler - Got job 211 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:02.094 INFO  DAGScheduler - Final stage: ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:02.094 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.094 INFO  DAGScheduler - Missing parents: List()
19:49:02.094 INFO  DAGScheduler - Submitting ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:02.094 INFO  MemoryStore - Block broadcast_562 stored as values in memory (estimated size 3.0 KiB, free 1918.6 MiB)
19:49:02.094 INFO  MemoryStore - Block broadcast_562_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.6 MiB)
19:49:02.095 INFO  BlockManagerInfo - Added broadcast_562_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.7 MiB)
19:49:02.095 INFO  SparkContext - Created broadcast 562 from broadcast at DAGScheduler.scala:1580
19:49:02.095 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:02.095 INFO  TaskSchedulerImpl - Adding task set 275.0 with 4 tasks resource profile 0
19:49:02.095 INFO  TaskSetManager - Starting task 0.0 in stage 275.0 (TID 346) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
19:49:02.096 INFO  TaskSetManager - Starting task 1.0 in stage 275.0 (TID 347) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
19:49:02.096 INFO  TaskSetManager - Starting task 2.0 in stage 275.0 (TID 348) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
19:49:02.096 INFO  TaskSetManager - Starting task 3.0 in stage 275.0 (TID 349) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
19:49:02.096 INFO  Executor - Running task 0.0 in stage 275.0 (TID 346)
19:49:02.096 INFO  Executor - Running task 2.0 in stage 275.0 (TID 348)
19:49:02.096 INFO  Executor - Running task 3.0 in stage 275.0 (TID 349)
19:49:02.096 INFO  Executor - Running task 1.0 in stage 275.0 (TID 347)
19:49:02.098 INFO  Executor - Finished task 0.0 in stage 275.0 (TID 346). 40312 bytes result sent to driver
19:49:02.098 INFO  Executor - Finished task 2.0 in stage 275.0 (TID 348). 40337 bytes result sent to driver
19:49:02.098 INFO  Executor - Finished task 1.0 in stage 275.0 (TID 347). 40426 bytes result sent to driver
19:49:02.098 INFO  Executor - Finished task 3.0 in stage 275.0 (TID 349). 40762 bytes result sent to driver
19:49:02.098 INFO  TaskSetManager - Finished task 2.0 in stage 275.0 (TID 348) in 2 ms on localhost (executor driver) (1/4)
19:49:02.098 INFO  TaskSetManager - Finished task 1.0 in stage 275.0 (TID 347) in 2 ms on localhost (executor driver) (2/4)
19:49:02.098 INFO  TaskSetManager - Finished task 0.0 in stage 275.0 (TID 346) in 3 ms on localhost (executor driver) (3/4)
19:49:02.099 INFO  TaskSetManager - Finished task 3.0 in stage 275.0 (TID 349) in 3 ms on localhost (executor driver) (4/4)
19:49:02.099 INFO  TaskSchedulerImpl - Removed TaskSet 275.0, whose tasks have all completed, from pool 
19:49:02.099 INFO  DAGScheduler - ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
19:49:02.099 INFO  DAGScheduler - Job 211 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.099 INFO  TaskSchedulerImpl - Killing all running tasks in stage 275: Stage finished
19:49:02.099 INFO  DAGScheduler - Job 211 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005396 s
19:49:02.099 INFO  FileInputFormat - Total input files to process : 1
19:49:02.135 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:02.135 INFO  DAGScheduler - Got job 212 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:02.135 INFO  DAGScheduler - Final stage: ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:02.135 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.135 INFO  DAGScheduler - Missing parents: List()
19:49:02.135 INFO  DAGScheduler - Submitting ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:02.151 INFO  MemoryStore - Block broadcast_563 stored as values in memory (estimated size 426.2 KiB, free 1918.2 MiB)
19:49:02.153 INFO  MemoryStore - Block broadcast_563_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.0 MiB)
19:49:02.153 INFO  BlockManagerInfo - Added broadcast_563_piece0 in memory on localhost:36125 (size: 153.7 KiB, free: 1919.6 MiB)
19:49:02.153 INFO  SparkContext - Created broadcast 563 from broadcast at DAGScheduler.scala:1580
19:49:02.153 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:02.153 INFO  TaskSchedulerImpl - Adding task set 276.0 with 1 tasks resource profile 0
19:49:02.153 INFO  TaskSetManager - Starting task 0.0 in stage 276.0 (TID 350) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
19:49:02.154 INFO  Executor - Running task 0.0 in stage 276.0 (TID 350)
19:49:02.183 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
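
The "Ignoring SAM validation error" lines above are emitted by htsjdk while decoding the dictFix test file under LENIENT validation stringency: each record-level problem (inconsistent mate fields here) is logged and skipped rather than thrown. A minimal sketch of that behaviour using the htsjdk API; the class name and the local file path are placeholders, not taken from the test:

    import java.io.File;
    import htsjdk.samtools.SAMRecord;
    import htsjdk.samtools.SamReader;
    import htsjdk.samtools.SamReaderFactory;
    import htsjdk.samtools.ValidationStringency;

    public class LenientReadSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical local copy of the test BAM; any BAM with mate-field inconsistencies will do.
            File bam = new File("NA12878.chr17_69k_70k.dictFix.bam");
            // LENIENT stringency makes htsjdk print "Ignoring SAM validation error: ..." and keep going,
            // which is exactly the pattern of messages interleaved with the Spark log here.
            try (SamReader reader = SamReaderFactory.makeDefault()
                    .validationStringency(ValidationStringency.LENIENT)
                    .open(bam)) {
                long n = 0;
                for (SAMRecord record : reader) {
                    n++;
                }
                System.out.println("read " + n + " records");
            }
        }
    }
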
19:49:02.188 INFO  Executor - Finished task 0.0 in stage 276.0 (TID 350). 159572 bytes result sent to driver
19:49:02.189 INFO  TaskSetManager - Finished task 0.0 in stage 276.0 (TID 350) in 36 ms on localhost (executor driver) (1/1)
19:49:02.189 INFO  TaskSchedulerImpl - Removed TaskSet 276.0, whose tasks have all completed, from pool 
19:49:02.189 INFO  DAGScheduler - ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.054 s
19:49:02.189 INFO  DAGScheduler - Job 212 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.189 INFO  TaskSchedulerImpl - Killing all running tasks in stage 276: Stage finished
19:49:02.189 INFO  DAGScheduler - Job 212 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.054494 s
19:49:02.191 INFO  MemoryStore - Block broadcast_564 stored as values in memory (estimated size 536.0 B, free 1918.0 MiB)
19:49:02.192 INFO  MemoryStore - Block broadcast_564_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.0 MiB)
19:49:02.192 INFO  BlockManagerInfo - Added broadcast_564_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.6 MiB)
19:49:02.192 INFO  SparkContext - Created broadcast 564 from broadcast at CramSource.java:114
19:49:02.193 INFO  MemoryStore - Block broadcast_565 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
19:49:02.199 INFO  MemoryStore - Block broadcast_565_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
19:49:02.199 INFO  BlockManagerInfo - Added broadcast_565_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.6 MiB)
19:49:02.199 INFO  SparkContext - Created broadcast 565 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
19:49:02.221 INFO  MemoryStore - Block broadcast_566 stored as values in memory (estimated size 536.0 B, free 1917.7 MiB)
19:49:02.222 INFO  MemoryStore - Block broadcast_566_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.7 MiB)
19:49:02.222 INFO  BlockManagerInfo - Added broadcast_566_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.6 MiB)
19:49:02.222 INFO  SparkContext - Created broadcast 566 from broadcast at CramSource.java:114
19:49:02.223 INFO  MemoryStore - Block broadcast_567 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
19:49:02.229 INFO  MemoryStore - Block broadcast_567_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
19:49:02.229 INFO  BlockManagerInfo - Added broadcast_567_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.5 MiB)
19:49:02.229 INFO  SparkContext - Created broadcast 567 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.246 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:02.246 INFO  DAGScheduler - Got job 213 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:02.246 INFO  DAGScheduler - Final stage: ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:02.246 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.246 INFO  DAGScheduler - Missing parents: List()
19:49:02.246 INFO  DAGScheduler - Submitting ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:02.246 INFO  MemoryStore - Block broadcast_568 stored as values in memory (estimated size 3.0 KiB, free 1917.3 MiB)
19:49:02.247 INFO  MemoryStore - Block broadcast_568_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.3 MiB)
19:49:02.247 INFO  BlockManagerInfo - Added broadcast_568_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.5 MiB)
19:49:02.247 INFO  SparkContext - Created broadcast 568 from broadcast at DAGScheduler.scala:1580
19:49:02.247 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:02.247 INFO  TaskSchedulerImpl - Adding task set 277.0 with 4 tasks resource profile 0
19:49:02.248 INFO  TaskSetManager - Starting task 0.0 in stage 277.0 (TID 351) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
19:49:02.248 INFO  TaskSetManager - Starting task 1.0 in stage 277.0 (TID 352) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
19:49:02.249 INFO  TaskSetManager - Starting task 2.0 in stage 277.0 (TID 353) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
19:49:02.249 INFO  TaskSetManager - Starting task 3.0 in stage 277.0 (TID 354) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
19:49:02.249 INFO  Executor - Running task 0.0 in stage 277.0 (TID 351)
19:49:02.249 INFO  Executor - Running task 1.0 in stage 277.0 (TID 352)
19:49:02.249 INFO  Executor - Running task 2.0 in stage 277.0 (TID 353)
19:49:02.249 INFO  Executor - Running task 3.0 in stage 277.0 (TID 354)
19:49:02.251 INFO  Executor - Finished task 3.0 in stage 277.0 (TID 354). 39300 bytes result sent to driver
19:49:02.251 INFO  Executor - Finished task 1.0 in stage 277.0 (TID 352). 39130 bytes result sent to driver
19:49:02.251 INFO  Executor - Finished task 2.0 in stage 277.0 (TID 353). 39027 bytes result sent to driver
19:49:02.251 INFO  Executor - Finished task 0.0 in stage 277.0 (TID 351). 38950 bytes result sent to driver
19:49:02.251 INFO  TaskSetManager - Finished task 3.0 in stage 277.0 (TID 354) in 2 ms on localhost (executor driver) (1/4)
19:49:02.252 INFO  TaskSetManager - Finished task 1.0 in stage 277.0 (TID 352) in 4 ms on localhost (executor driver) (2/4)
19:49:02.252 INFO  TaskSetManager - Finished task 0.0 in stage 277.0 (TID 351) in 4 ms on localhost (executor driver) (3/4)
19:49:02.252 INFO  TaskSetManager - Finished task 2.0 in stage 277.0 (TID 353) in 4 ms on localhost (executor driver) (4/4)
19:49:02.252 INFO  TaskSchedulerImpl - Removed TaskSet 277.0, whose tasks have all completed, from pool 
19:49:02.252 INFO  DAGScheduler - ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
19:49:02.252 INFO  DAGScheduler - Job 213 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.252 INFO  TaskSchedulerImpl - Killing all running tasks in stage 277: Stage finished
19:49:02.252 INFO  DAGScheduler - Job 213 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006661 s
19:49:02.253 INFO  FileInputFormat - Total input files to process : 1
19:49:02.294 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:02.294 INFO  DAGScheduler - Got job 214 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:02.294 INFO  DAGScheduler - Final stage: ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:02.294 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.294 INFO  DAGScheduler - Missing parents: List()
19:49:02.294 INFO  DAGScheduler - Submitting ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:02.314 INFO  MemoryStore - Block broadcast_569 stored as values in memory (estimated size 286.8 KiB, free 1917.1 MiB)
19:49:02.315 INFO  MemoryStore - Block broadcast_569_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1917.0 MiB)
19:49:02.315 INFO  BlockManagerInfo - Added broadcast_569_piece0 in memory on localhost:36125 (size: 103.6 KiB, free: 1919.4 MiB)
19:49:02.315 INFO  SparkContext - Created broadcast 569 from broadcast at DAGScheduler.scala:1580
19:49:02.315 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:02.315 INFO  TaskSchedulerImpl - Adding task set 278.0 with 1 tasks resource profile 0
19:49:02.316 INFO  TaskSetManager - Starting task 0.0 in stage 278.0 (TID 355) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
19:49:02.316 INFO  Executor - Running task 0.0 in stage 278.0 (TID 355)
19:49:02.337 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
19:49:02.346 INFO  Executor - Finished task 0.0 in stage 278.0 (TID 355). 154101 bytes result sent to driver
19:49:02.346 INFO  TaskSetManager - Finished task 0.0 in stage 278.0 (TID 355) in 30 ms on localhost (executor driver) (1/1)
19:49:02.346 INFO  TaskSchedulerImpl - Removed TaskSet 278.0, whose tasks have all completed, from pool 
19:49:02.346 INFO  DAGScheduler - ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.051 s
19:49:02.346 INFO  DAGScheduler - Job 214 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.346 INFO  TaskSchedulerImpl - Killing all running tasks in stage 278: Stage finished
19:49:02.347 INFO  DAGScheduler - Job 214 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.052480 s
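
Each test case in this log drives a pair of Spark jobs like the two above: one collect over an RDD built with ctx.parallelize (the ParallelCollectionRDD at ReadsSparkSourceUnitTest.java:310) and one collect over the reads loaded and filtered from the BAM/CRAM (the MapPartitionsRDD at ReadsSparkSource.java:96), after which the two collected lists are presumably compared on the driver. A minimal sketch of that collect-and-compare shape using plain spark-core types; class and variable names are illustrative, not GATK's:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class CollectCompareSketch {
        public static void main(String[] args) {
            // Local Spark context, mirroring the "localhost, executor driver" entries in the log.
            SparkConf conf = new SparkConf().setMaster("local[4]").setAppName("collect-compare-sketch");
            try (JavaSparkContext ctx = new JavaSparkContext(conf)) {
                // Stand-in for the expected reads built via ctx.parallelize(...).
                JavaRDD<String> expected = ctx.parallelize(Arrays.asList("read1", "read2", "read3"), 4);
                // Stand-in for the reads loaded from the BAM/CRAM and filtered.
                JavaRDD<String> actual = ctx.parallelize(Arrays.asList("read1", "read2", "read3"), 1);
                // Each collect() triggers one Spark job, matching the "Starting job: collect at ..." lines.
                List<String> expectedReads = expected.collect();
                List<String> actualReads = actual.collect();
                if (!expectedReads.equals(actualReads)) {
                    throw new AssertionError("collected reads differ");
                }
                System.out.println("collected " + actualReads.size() + " matching reads");
            }
        }
    }
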
19:49:02.349 INFO  MemoryStore - Block broadcast_570 stored as values in memory (estimated size 297.9 KiB, free 1916.7 MiB)
19:49:02.356 INFO  BlockManagerInfo - Removed broadcast_560_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.4 MiB)
19:49:02.356 INFO  BlockManagerInfo - Removed broadcast_569_piece0 on localhost:36125 in memory (size: 103.6 KiB, free: 1919.5 MiB)
19:49:02.357 INFO  BlockManagerInfo - Removed broadcast_559_piece0 on localhost:36125 in memory (size: 103.6 KiB, free: 1919.6 MiB)
19:49:02.357 INFO  BlockManagerInfo - Removed broadcast_567_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.7 MiB)
19:49:02.357 INFO  BlockManagerInfo - Removed broadcast_556_piece0 on localhost:36125 in memory (size: 206.0 B, free: 1919.7 MiB)
19:49:02.358 INFO  BlockManagerInfo - Removed broadcast_561_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.7 MiB)
19:49:02.358 INFO  BlockManagerInfo - Removed broadcast_566_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.7 MiB)
19:49:02.358 INFO  BlockManagerInfo - Removed broadcast_562_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.8 MiB)
19:49:02.361 INFO  BlockManagerInfo - Removed broadcast_557_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.361 INFO  BlockManagerInfo - Removed broadcast_568_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.8 MiB)
19:49:02.362 INFO  BlockManagerInfo - Removed broadcast_563_piece0 on localhost:36125 in memory (size: 153.7 KiB, free: 1920.0 MiB)
19:49:02.362 INFO  BlockManagerInfo - Removed broadcast_564_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1920.0 MiB)
19:49:02.362 INFO  BlockManagerInfo - Removed broadcast_565_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:02.363 INFO  MemoryStore - Block broadcast_570_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
19:49:02.363 INFO  BlockManagerInfo - Added broadcast_570_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1920.0 MiB)
19:49:02.363 INFO  SparkContext - Created broadcast 570 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.384 INFO  MemoryStore - Block broadcast_571 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
19:49:02.391 INFO  MemoryStore - Block broadcast_571_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
19:49:02.391 INFO  BlockManagerInfo - Added broadcast_571_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.9 MiB)
19:49:02.391 INFO  SparkContext - Created broadcast 571 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.413 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:02.413 INFO  DAGScheduler - Got job 215 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:02.413 INFO  DAGScheduler - Final stage: ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:02.413 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.413 INFO  DAGScheduler - Missing parents: List()
19:49:02.414 INFO  DAGScheduler - Submitting ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:02.414 INFO  MemoryStore - Block broadcast_572 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
19:49:02.414 INFO  MemoryStore - Block broadcast_572_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
19:49:02.414 INFO  BlockManagerInfo - Added broadcast_572_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.9 MiB)
19:49:02.415 INFO  SparkContext - Created broadcast 572 from broadcast at DAGScheduler.scala:1580
19:49:02.415 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:02.415 INFO  TaskSchedulerImpl - Adding task set 279.0 with 4 tasks resource profile 0
19:49:02.415 INFO  TaskSetManager - Starting task 0.0 in stage 279.0 (TID 356) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
19:49:02.416 INFO  TaskSetManager - Starting task 1.0 in stage 279.0 (TID 357) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
19:49:02.416 INFO  TaskSetManager - Starting task 2.0 in stage 279.0 (TID 358) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
19:49:02.416 INFO  TaskSetManager - Starting task 3.0 in stage 279.0 (TID 359) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
19:49:02.416 INFO  Executor - Running task 0.0 in stage 279.0 (TID 356)
19:49:02.416 INFO  Executor - Running task 1.0 in stage 279.0 (TID 357)
19:49:02.416 INFO  Executor - Running task 3.0 in stage 279.0 (TID 359)
19:49:02.416 INFO  Executor - Running task 2.0 in stage 279.0 (TID 358)
19:49:02.418 INFO  Executor - Finished task 1.0 in stage 279.0 (TID 357). 40426 bytes result sent to driver
19:49:02.418 INFO  Executor - Finished task 0.0 in stage 279.0 (TID 356). 40312 bytes result sent to driver
19:49:02.418 INFO  Executor - Finished task 3.0 in stage 279.0 (TID 359). 40762 bytes result sent to driver
19:49:02.418 INFO  TaskSetManager - Finished task 0.0 in stage 279.0 (TID 356) in 3 ms on localhost (executor driver) (1/4)
19:49:02.418 INFO  Executor - Finished task 2.0 in stage 279.0 (TID 358). 40380 bytes result sent to driver
19:49:02.418 INFO  TaskSetManager - Finished task 3.0 in stage 279.0 (TID 359) in 2 ms on localhost (executor driver) (2/4)
19:49:02.419 INFO  TaskSetManager - Finished task 2.0 in stage 279.0 (TID 358) in 3 ms on localhost (executor driver) (3/4)
19:49:02.419 INFO  TaskSetManager - Finished task 1.0 in stage 279.0 (TID 357) in 4 ms on localhost (executor driver) (4/4)
19:49:02.419 INFO  TaskSchedulerImpl - Removed TaskSet 279.0, whose tasks have all completed, from pool 
19:49:02.419 INFO  DAGScheduler - ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
19:49:02.419 INFO  DAGScheduler - Job 215 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.419 INFO  TaskSchedulerImpl - Killing all running tasks in stage 279: Stage finished
19:49:02.419 INFO  DAGScheduler - Job 215 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005988 s
19:49:02.420 INFO  FileInputFormat - Total input files to process : 1
19:49:02.456 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:02.456 INFO  DAGScheduler - Got job 216 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:02.456 INFO  DAGScheduler - Final stage: ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:02.456 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.456 INFO  DAGScheduler - Missing parents: List()
19:49:02.456 INFO  DAGScheduler - Submitting ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:02.472 INFO  MemoryStore - Block broadcast_573 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
19:49:02.473 INFO  MemoryStore - Block broadcast_573_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.7 MiB)
19:49:02.474 INFO  BlockManagerInfo - Added broadcast_573_piece0 in memory on localhost:36125 (size: 153.7 KiB, free: 1919.8 MiB)
19:49:02.474 INFO  SparkContext - Created broadcast 573 from broadcast at DAGScheduler.scala:1580
19:49:02.474 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:02.474 INFO  TaskSchedulerImpl - Adding task set 280.0 with 1 tasks resource profile 0
19:49:02.474 INFO  TaskSetManager - Starting task 0.0 in stage 280.0 (TID 360) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
19:49:02.474 INFO  Executor - Running task 0.0 in stage 280.0 (TID 360)
19:49:02.504 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
19:49:02.507 INFO  Executor - Finished task 0.0 in stage 280.0 (TID 360). 159572 bytes result sent to driver
19:49:02.507 INFO  TaskSetManager - Finished task 0.0 in stage 280.0 (TID 360) in 33 ms on localhost (executor driver) (1/1)
19:49:02.507 INFO  TaskSchedulerImpl - Removed TaskSet 280.0, whose tasks have all completed, from pool 
19:49:02.507 INFO  DAGScheduler - ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.051 s
19:49:02.507 INFO  DAGScheduler - Job 216 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.508 INFO  TaskSchedulerImpl - Killing all running tasks in stage 280: Stage finished
19:49:02.508 INFO  DAGScheduler - Job 216 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.052048 s
19:49:02.510 INFO  MemoryStore - Block broadcast_574 stored as values in memory (estimated size 536.0 B, free 1918.7 MiB)
19:49:02.510 INFO  MemoryStore - Block broadcast_574_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.7 MiB)
19:49:02.510 INFO  BlockManagerInfo - Added broadcast_574_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.7 MiB)
19:49:02.510 INFO  SparkContext - Created broadcast 574 from broadcast at CramSource.java:114
19:49:02.511 INFO  MemoryStore - Block broadcast_575 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
19:49:02.517 INFO  MemoryStore - Block broadcast_575_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
19:49:02.517 INFO  BlockManagerInfo - Added broadcast_575_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.7 MiB)
19:49:02.517 INFO  SparkContext - Created broadcast 575 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.536 INFO  MemoryStore - Block broadcast_576 stored as values in memory (estimated size 536.0 B, free 1918.4 MiB)
19:49:02.537 INFO  MemoryStore - Block broadcast_576_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.4 MiB)
19:49:02.537 INFO  BlockManagerInfo - Added broadcast_576_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.7 MiB)
19:49:02.537 INFO  SparkContext - Created broadcast 576 from broadcast at CramSource.java:114
19:49:02.538 INFO  MemoryStore - Block broadcast_577 stored as values in memory (estimated size 297.9 KiB, free 1918.1 MiB)
19:49:02.544 INFO  MemoryStore - Block broadcast_577_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.1 MiB)
19:49:02.544 INFO  BlockManagerInfo - Added broadcast_577_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.7 MiB)
19:49:02.544 INFO  SparkContext - Created broadcast 577 from newAPIHadoopFile at PathSplitSource.java:96
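
The small 536-byte broadcasts created at CramSource.java:114 and the ~298 KiB ones from newAPIHadoopFile at PathSplitSource.java:96 are Spark broadcast variables: a value serialized once on the driver, cached in the block manager (the "Added broadcast_N_piece0" lines), and read by tasks via value() instead of being shipped with every task. What exactly those classes broadcast is internal to them; a minimal sketch of the broadcast API with a placeholder payload:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.broadcast.Broadcast;

    public class BroadcastSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("broadcast-sketch");
            try (JavaSparkContext ctx = new JavaSparkContext(conf)) {
                // Driver-side value shared with every task; stands in for whatever CramSource/PathSplitSource broadcast.
                Broadcast<String> shared = ctx.broadcast("some-shared-metadata");
                // Each ctx.broadcast(...) call produces one "Created broadcast N" entry like those above.
                long kept = ctx.parallelize(Arrays.asList(1, 2, 3, 4), 2)
                        .filter(i -> shared.value() != null)
                        .count();
                System.out.println("kept " + kept + " elements while reading the broadcast on executors");
            }
        }
    }
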
19:49:02.560 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
19:49:02.561 INFO  DAGScheduler - Got job 217 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
19:49:02.561 INFO  DAGScheduler - Final stage: ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111)
19:49:02.561 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.561 INFO  DAGScheduler - Missing parents: List()
19:49:02.561 INFO  DAGScheduler - Submitting ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:02.561 INFO  MemoryStore - Block broadcast_578 stored as values in memory (estimated size 3.0 KiB, free 1918.1 MiB)
19:49:02.561 INFO  MemoryStore - Block broadcast_578_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.1 MiB)
19:49:02.562 INFO  BlockManagerInfo - Added broadcast_578_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.7 MiB)
19:49:02.562 INFO  SparkContext - Created broadcast 578 from broadcast at DAGScheduler.scala:1580
19:49:02.562 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:02.562 INFO  TaskSchedulerImpl - Adding task set 281.0 with 4 tasks resource profile 0
19:49:02.562 INFO  TaskSetManager - Starting task 0.0 in stage 281.0 (TID 361) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
19:49:02.563 INFO  TaskSetManager - Starting task 1.0 in stage 281.0 (TID 362) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
19:49:02.563 INFO  TaskSetManager - Starting task 2.0 in stage 281.0 (TID 363) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
19:49:02.563 INFO  TaskSetManager - Starting task 3.0 in stage 281.0 (TID 364) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
19:49:02.563 INFO  Executor - Running task 2.0 in stage 281.0 (TID 363)
19:49:02.563 INFO  Executor - Running task 3.0 in stage 281.0 (TID 364)
19:49:02.563 INFO  Executor - Running task 1.0 in stage 281.0 (TID 362)
19:49:02.563 INFO  Executor - Running task 0.0 in stage 281.0 (TID 361)
19:49:02.565 INFO  Executor - Finished task 0.0 in stage 281.0 (TID 361). 38907 bytes result sent to driver
19:49:02.565 INFO  Executor - Finished task 1.0 in stage 281.0 (TID 362). 39087 bytes result sent to driver
19:49:02.565 INFO  Executor - Finished task 2.0 in stage 281.0 (TID 363). 39027 bytes result sent to driver
19:49:02.565 INFO  Executor - Finished task 3.0 in stage 281.0 (TID 364). 39343 bytes result sent to driver
19:49:02.565 INFO  TaskSetManager - Finished task 1.0 in stage 281.0 (TID 362) in 3 ms on localhost (executor driver) (1/4)
19:49:02.566 INFO  TaskSetManager - Finished task 2.0 in stage 281.0 (TID 363) in 2 ms on localhost (executor driver) (2/4)
19:49:02.566 INFO  TaskSetManager - Finished task 0.0 in stage 281.0 (TID 361) in 4 ms on localhost (executor driver) (3/4)
19:49:02.566 INFO  TaskSetManager - Finished task 3.0 in stage 281.0 (TID 364) in 3 ms on localhost (executor driver) (4/4)
19:49:02.566 INFO  TaskSchedulerImpl - Removed TaskSet 281.0, whose tasks have all completed, from pool 
19:49:02.566 INFO  DAGScheduler - ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
19:49:02.566 INFO  DAGScheduler - Job 217 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.566 INFO  TaskSchedulerImpl - Killing all running tasks in stage 281: Stage finished
19:49:02.566 INFO  DAGScheduler - Job 217 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005527 s
19:49:02.567 INFO  FileInputFormat - Total input files to process : 1
19:49:02.591 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
19:49:02.591 INFO  DAGScheduler - Got job 218 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
19:49:02.591 INFO  DAGScheduler - Final stage: ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112)
19:49:02.591 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.591 INFO  DAGScheduler - Missing parents: List()
19:49:02.591 INFO  DAGScheduler - Submitting ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:02.602 INFO  MemoryStore - Block broadcast_579 stored as values in memory (estimated size 286.8 KiB, free 1917.8 MiB)
19:49:02.603 INFO  MemoryStore - Block broadcast_579_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1917.7 MiB)
19:49:02.604 INFO  BlockManagerInfo - Added broadcast_579_piece0 in memory on localhost:36125 (size: 103.6 KiB, free: 1919.5 MiB)
19:49:02.604 INFO  SparkContext - Created broadcast 579 from broadcast at DAGScheduler.scala:1580
19:49:02.604 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:02.604 INFO  TaskSchedulerImpl - Adding task set 282.0 with 1 tasks resource profile 0
19:49:02.604 INFO  TaskSetManager - Starting task 0.0 in stage 282.0 (TID 365) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
19:49:02.604 INFO  Executor - Running task 0.0 in stage 282.0 (TID 365)
19:49:02.624 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
19:49:02.635 INFO  BlockManagerInfo - Removed broadcast_574_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.5 MiB)
19:49:02.635 INFO  Executor - Finished task 0.0 in stage 282.0 (TID 365). 154144 bytes result sent to driver
19:49:02.636 INFO  BlockManagerInfo - Removed broadcast_578_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.6 MiB)
19:49:02.636 INFO  TaskSetManager - Finished task 0.0 in stage 282.0 (TID 365) in 32 ms on localhost (executor driver) (1/1)
19:49:02.636 INFO  TaskSchedulerImpl - Removed TaskSet 282.0, whose tasks have all completed, from pool 
19:49:02.636 INFO  BlockManagerInfo - Removed broadcast_573_piece0 on localhost:36125 in memory (size: 153.7 KiB, free: 1919.7 MiB)
19:49:02.636 INFO  DAGScheduler - ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.044 s
19:49:02.637 INFO  DAGScheduler - Job 218 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.637 INFO  TaskSchedulerImpl - Killing all running tasks in stage 282: Stage finished
19:49:02.637 INFO  DAGScheduler - Job 218 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.045427 s
19:49:02.637 INFO  BlockManagerInfo - Removed broadcast_572_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.7 MiB)
19:49:02.638 INFO  BlockManagerInfo - Removed broadcast_570_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.639 INFO  BlockManagerInfo - Removed broadcast_571_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.639 INFO  BlockManagerInfo - Removed broadcast_575_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.639 INFO  MemoryStore - Block broadcast_580 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
19:49:02.646 INFO  MemoryStore - Block broadcast_580_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
19:49:02.646 INFO  BlockManagerInfo - Added broadcast_580_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.646 INFO  SparkContext - Created broadcast 580 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.667 INFO  MemoryStore - Block broadcast_581 stored as values in memory (estimated size 536.0 B, free 1918.9 MiB)
19:49:02.668 INFO  MemoryStore - Block broadcast_581_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.9 MiB)
19:49:02.668 INFO  BlockManagerInfo - Added broadcast_581_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.8 MiB)
19:49:02.668 INFO  SparkContext - Created broadcast 581 from broadcast at CramSource.java:114
19:49:02.669 INFO  MemoryStore - Block broadcast_582 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
19:49:02.675 INFO  MemoryStore - Block broadcast_582_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
19:49:02.675 INFO  BlockManagerInfo - Added broadcast_582_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:02.675 INFO  SparkContext - Created broadcast 582 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.698 WARN  FileSystem - Failed to initialize filesystem hdfs://bogus/path.bam: java.lang.IllegalArgumentException: java.net.UnknownHostException: bogus
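
The WARN line above corresponds to readsSparkSourceUnknownHostTest: asking Hadoop for a filesystem rooted at an unresolvable namenode host fails while the client filesystem is being initialized, and the test only needs that failure to surface cleanly. A hedged sketch of reproducing the same kind of warning with the Hadoop FileSystem API (assumes the HDFS client is on the classpath; the URI mirrors the bogus path in the log):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class UnknownHostSketch {
        public static void main(String[] args) {
            try {
                // "bogus" cannot be resolved as a namenode host, so filesystem initialization fails,
                // logging a "Failed to initialize filesystem hdfs://bogus/path.bam" warning like the one above.
                FileSystem fs = FileSystem.get(URI.create("hdfs://bogus/path.bam"), new Configuration());
                System.out.println("unexpectedly initialized " + fs.getUri());
            } catch (Exception e) {
                // Expected: an exception chain ending in java.net.UnknownHostException: bogus.
                System.out.println("expected failure: " + e);
            }
        }
    }
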
19:49:02.701 INFO  MemoryStore - Block broadcast_583 stored as values in memory (estimated size 298.0 KiB, free 1918.3 MiB)
19:49:02.707 INFO  MemoryStore - Block broadcast_583_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.3 MiB)
19:49:02.707 INFO  BlockManagerInfo - Added broadcast_583_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.7 MiB)
19:49:02.707 INFO  SparkContext - Created broadcast 583 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.732 INFO  MemoryStore - Block broadcast_584 stored as values in memory (estimated size 298.0 KiB, free 1918.0 MiB)
19:49:02.738 INFO  MemoryStore - Block broadcast_584_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.9 MiB)
19:49:02.738 INFO  BlockManagerInfo - Added broadcast_584_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.7 MiB)
19:49:02.738 INFO  SparkContext - Created broadcast 584 from newAPIHadoopFile at PathSplitSource.java:96
19:49:02.761 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:154
19:49:02.761 INFO  DAGScheduler - Got job 219 (collect at ReadsSparkSourceUnitTest.java:154) with 4 output partitions
19:49:02.761 INFO  DAGScheduler - Final stage: ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154)
19:49:02.761 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.761 INFO  DAGScheduler - Missing parents: List()
19:49:02.761 INFO  DAGScheduler - Submitting ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
19:49:02.761 INFO  MemoryStore - Block broadcast_585 stored as values in memory (estimated size 3.0 KiB, free 1917.9 MiB)
19:49:02.762 INFO  MemoryStore - Block broadcast_585_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.9 MiB)
19:49:02.762 INFO  BlockManagerInfo - Added broadcast_585_piece0 in memory on localhost:36125 (size: 1778.0 B, free: 1919.7 MiB)
19:49:02.762 INFO  SparkContext - Created broadcast 585 from broadcast at DAGScheduler.scala:1580
19:49:02.762 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
19:49:02.762 INFO  TaskSchedulerImpl - Adding task set 283.0 with 4 tasks resource profile 0
19:49:02.763 INFO  TaskSetManager - Starting task 0.0 in stage 283.0 (TID 366) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
19:49:02.763 INFO  TaskSetManager - Starting task 1.0 in stage 283.0 (TID 367) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
19:49:02.763 INFO  TaskSetManager - Starting task 2.0 in stage 283.0 (TID 368) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
19:49:02.764 INFO  TaskSetManager - Starting task 3.0 in stage 283.0 (TID 369) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
19:49:02.764 INFO  Executor - Running task 0.0 in stage 283.0 (TID 366)
19:49:02.764 INFO  Executor - Running task 3.0 in stage 283.0 (TID 369)
19:49:02.764 INFO  Executor - Running task 1.0 in stage 283.0 (TID 367)
19:49:02.764 INFO  Executor - Running task 2.0 in stage 283.0 (TID 368)
19:49:02.766 INFO  Executor - Finished task 2.0 in stage 283.0 (TID 368). 163356 bytes result sent to driver
19:49:02.766 INFO  Executor - Finished task 3.0 in stage 283.0 (TID 369). 162694 bytes result sent to driver
19:49:02.767 INFO  Executor - Finished task 1.0 in stage 283.0 (TID 367). 163411 bytes result sent to driver
19:49:02.767 INFO  Executor - Finished task 0.0 in stage 283.0 (TID 366). 163161 bytes result sent to driver
19:49:02.767 INFO  TaskSetManager - Finished task 2.0 in stage 283.0 (TID 368) in 4 ms on localhost (executor driver) (1/4)
19:49:02.768 INFO  TaskSetManager - Finished task 3.0 in stage 283.0 (TID 369) in 4 ms on localhost (executor driver) (2/4)
19:49:02.768 INFO  TaskSetManager - Finished task 1.0 in stage 283.0 (TID 367) in 5 ms on localhost (executor driver) (3/4)
19:49:02.769 INFO  TaskSetManager - Finished task 0.0 in stage 283.0 (TID 366) in 5 ms on localhost (executor driver) (4/4)
19:49:02.769 INFO  TaskSchedulerImpl - Removed TaskSet 283.0, whose tasks have all completed, from pool 
19:49:02.769 INFO  DAGScheduler - ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154) finished in 0.008 s
19:49:02.769 INFO  DAGScheduler - Job 219 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.769 INFO  TaskSchedulerImpl - Killing all running tasks in stage 283: Stage finished
19:49:02.769 INFO  DAGScheduler - Job 219 finished: collect at ReadsSparkSourceUnitTest.java:154, took 0.008061 s
19:49:02.778 INFO  FileInputFormat - Total input files to process : 2
19:49:02.814 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:155
19:49:02.814 INFO  DAGScheduler - Got job 220 (collect at ReadsSparkSourceUnitTest.java:155) with 2 output partitions
19:49:02.814 INFO  DAGScheduler - Final stage: ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155)
19:49:02.814 INFO  DAGScheduler - Parents of final stage: List()
19:49:02.815 INFO  DAGScheduler - Missing parents: List()
19:49:02.815 INFO  DAGScheduler - Submitting ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:02.831 INFO  MemoryStore - Block broadcast_586 stored as values in memory (estimated size 426.2 KiB, free 1917.5 MiB)
19:49:02.832 INFO  MemoryStore - Block broadcast_586_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.3 MiB)
19:49:02.833 INFO  BlockManagerInfo - Added broadcast_586_piece0 in memory on localhost:36125 (size: 153.7 KiB, free: 1919.5 MiB)
19:49:02.833 INFO  SparkContext - Created broadcast 586 from broadcast at DAGScheduler.scala:1580
19:49:02.833 INFO  DAGScheduler - Submitting 2 missing tasks from ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0, 1))
19:49:02.833 INFO  TaskSchedulerImpl - Adding task set 284.0 with 2 tasks resource profile 0
19:49:02.833 INFO  TaskSetManager - Starting task 0.0 in stage 284.0 (TID 370) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7928 bytes) 
19:49:02.833 INFO  TaskSetManager - Starting task 1.0 in stage 284.0 (TID 371) (localhost, executor driver, partition 1, PROCESS_LOCAL, 7928 bytes) 
19:49:02.834 INFO  Executor - Running task 0.0 in stage 284.0 (TID 370)
19:49:02.834 INFO  Executor - Running task 1.0 in stage 284.0 (TID 371)
19:49:02.868 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00000.bam:0+131967
19:49:02.869 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00001.bam:0+131967
19:49:02.877 INFO  Executor - Finished task 1.0 in stage 284.0 (TID 371). 325590 bytes result sent to driver
19:49:02.878 INFO  TaskSetManager - Finished task 1.0 in stage 284.0 (TID 371) in 45 ms on localhost (executor driver) (1/2)
19:49:02.878 INFO  Executor - Finished task 0.0 in stage 284.0 (TID 370). 325590 bytes result sent to driver
19:49:02.879 INFO  TaskSetManager - Finished task 0.0 in stage 284.0 (TID 370) in 46 ms on localhost (executor driver) (2/2)
19:49:02.879 INFO  TaskSchedulerImpl - Removed TaskSet 284.0, whose tasks have all completed, from pool 
19:49:02.879 INFO  DAGScheduler - ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155) finished in 0.064 s
19:49:02.879 INFO  DAGScheduler - Job 220 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:02.879 INFO  TaskSchedulerImpl - Killing all running tasks in stage 284: Stage finished
19:49:02.879 INFO  DAGScheduler - Job 220 finished: collect at ReadsSparkSourceUnitTest.java:155, took 0.065119 s
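
The lines that follow show a single-namenode, single-datanode MiniDFSCluster being formatted and started inside the test JVM for testCRAMReferenceFromHDFS, so a CRAM and its reference can be read back over a real hdfs:// URI (MiniDFSCluster ships in the hadoop-hdfs tests jar visible on the classpath further down). A minimal sketch of bringing up such a cluster in a test; the storage directory and the /test path are illustrative:

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Illustrative scratch directory; the log uses /tmp/minicluster_storage... for the same purpose.
            conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, new File("/tmp/minidfs-sketch").getAbsolutePath());

            // One namenode and one datanode, matching "starting cluster: numNameNodes=1, numDataNodes=1" below.
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
            try {
                cluster.waitActive();
                FileSystem fs = cluster.getFileSystem();
                // A test would now copy the CRAM and its reference here and point the Spark reads source at them.
                fs.mkdirs(new Path("/test"));
                System.out.println("mini HDFS up at " + fs.getUri());
            } finally {
                cluster.shutdown();
            }
        }
    }
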
19:49:02.888 INFO  MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
19:49:02.888 INFO  NameNode - Formatting using clusterid: testClusterID
19:49:02.889 INFO  FSEditLog - Edit logging is async:true
19:49:02.896 INFO  FSNamesystem - KeyProvider: null
19:49:02.896 INFO  FSNamesystem - fsLock is fair: true
19:49:02.896 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
19:49:02.896 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
19:49:02.896 INFO  FSNamesystem - supergroup             = supergroup
19:49:02.896 INFO  FSNamesystem - isPermissionEnabled    = true
19:49:02.896 INFO  FSNamesystem - isStoragePolicyEnabled = true
19:49:02.896 INFO  FSNamesystem - HA Enabled: false
19:49:02.896 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:02.896 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
19:49:02.896 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
19:49:02.896 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19:49:02.896 INFO  BlockManager - The block deletion will start around 2025 Jul 15 19:49:02
19:49:02.896 INFO  GSet - Computing capacity for map BlocksMap
19:49:02.896 INFO  GSet - VM type       = 64-bit
19:49:02.896 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
19:49:02.896 INFO  GSet - capacity      = 2^23 = 8388608 entries
19:49:02.901 INFO  BlockManagerInfo - Removed broadcast_580_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.6 MiB)
19:49:02.901 INFO  BlockManagerInfo - Removed broadcast_584_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.6 MiB)
19:49:02.902 INFO  BlockManagerInfo - Removed broadcast_585_piece0 on localhost:36125 in memory (size: 1778.0 B, free: 1919.6 MiB)
19:49:02.902 INFO  BlockManagerInfo - Removed broadcast_579_piece0 on localhost:36125 in memory (size: 103.6 KiB, free: 1919.7 MiB)
19:49:02.902 INFO  BlockManager - Storage policy satisfier is disabled
19:49:02.902 INFO  BlockManager - dfs.block.access.token.enable = false
19:49:02.902 INFO  BlockManagerInfo - Removed broadcast_581_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.7 MiB)
19:49:02.902 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
19:49:02.902 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
19:49:02.902 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
19:49:02.902 INFO  BlockManager - defaultReplication         = 1
19:49:02.902 INFO  BlockManager - maxReplication             = 512
19:49:02.902 INFO  BlockManager - minReplication             = 1
19:49:02.902 INFO  BlockManager - maxReplicationStreams      = 2
19:49:02.902 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
19:49:02.902 INFO  BlockManager - encryptDataTransfer        = false
19:49:02.902 INFO  BlockManager - maxNumBlocksToLog          = 1000
19:49:02.903 INFO  GSet - Computing capacity for map INodeMap
19:49:02.903 INFO  GSet - VM type       = 64-bit
19:49:02.903 INFO  BlockManagerInfo - Removed broadcast_586_piece0 on localhost:36125 in memory (size: 153.7 KiB, free: 1919.9 MiB)
19:49:02.903 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
19:49:02.903 INFO  GSet - capacity      = 2^22 = 4194304 entries
19:49:02.903 INFO  BlockManagerInfo - Removed broadcast_576_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.9 MiB)
19:49:02.904 INFO  BlockManagerInfo - Removed broadcast_577_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.9 MiB)
19:49:02.904 INFO  FSDirectory - ACLs enabled? true
19:49:02.904 INFO  FSDirectory - POSIX ACL inheritance enabled? true
19:49:02.904 INFO  FSDirectory - XAttrs enabled? true
19:49:02.904 INFO  NameNode - Caching file names occurring more than 10 times
19:49:02.904 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
19:49:02.904 INFO  SnapshotManager - SkipList is disabled
19:49:02.904 INFO  GSet - Computing capacity for map cachedBlocks
19:49:02.904 INFO  GSet - VM type       = 64-bit
19:49:02.904 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
19:49:02.904 INFO  GSet - capacity      = 2^20 = 1048576 entries
19:49:02.904 INFO  BlockManagerInfo - Removed broadcast_582_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:02.904 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
19:49:02.904 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
19:49:02.904 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19:49:02.904 INFO  FSNamesystem - Retry cache on namenode is enabled
19:49:02.904 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19:49:02.904 INFO  BlockManagerInfo - Removed broadcast_583_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1920.0 MiB)
19:49:02.904 INFO  GSet - Computing capacity for map NameNodeRetryCache
19:49:02.904 INFO  GSet - VM type       = 64-bit
19:49:02.904 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
19:49:02.904 INFO  GSet - capacity      = 2^17 = 131072 entries
19:49:02.905 INFO  FSImage - Allocated new BlockPoolId: BP-1692280096-10.1.0.127-1752608942905
19:49:02.907 INFO  Storage - Storage directory /tmp/minicluster_storage12854353944789317805/name-0-1 has been successfully formatted.
19:49:02.908 INFO  Storage - Storage directory /tmp/minicluster_storage12854353944789317805/name-0-2 has been successfully formatted.
19:49:02.917 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage12854353944789317805/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
19:49:02.918 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage12854353944789317805/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
19:49:02.921 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage12854353944789317805/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
19:49:02.923 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage12854353944789317805/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
19:49:02.924 INFO  NNStorageRetentionManager - Going to retain 1 images with txid >= 0
19:49:02.927 INFO  FSNamesystem - Stopping services started for active state
19:49:02.927 INFO  FSNamesystem - Stopping services started for standby state
19:49:02.928 INFO  NameNode - createNameNode []
19:49:02.929 WARN  MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
19:49:02.930 INFO  MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
19:49:02.930 INFO  MetricsSystemImpl - NameNode metrics system started
19:49:02.930 INFO  NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
19:49:02.941 INFO  JvmPauseMonitor - Starting JVM pause monitor
19:49:02.941 INFO  DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
19:49:02.941 INFO  DFSUtil - Starting Web-server for hdfs at: http://localhost:0
19:49:02.942 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
19:49:02.943 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
19:49:02.943 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19:49:02.944 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
19:49:02.944 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19:49:02.945 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
19:49:02.945 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
19:49:02.945 INFO  HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
19:49:02.946 INFO  HttpServer2 - Jetty bound to port 46803
19:49:02.946 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
19:49:02.947 INFO  session - DefaultSessionIdManager workerName=node0
19:49:02.947 INFO  session - No SessionScavenger set, using defaults
19:49:02.947 INFO  session - node0 Scavenging every 600000ms
19:49:02.948 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
19:49:02.948 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@3930425b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
19:49:03.035 INFO  ContextHandler - Started o.e.j.w.WebAppContext@48d20304{hdfs,/,file:///tmp/jetty-localhost-46803-hadoop-hdfs-3_3_6-tests_jar-_-any-2036429736712450630/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
19:49:03.035 INFO  AbstractConnector - Started ServerConnector@6671eee9{HTTP/1.1, (http/1.1)}{localhost:46803}
19:49:03.035 INFO  Server - Started @74248ms
19:49:03.036 INFO  FSEditLog - Edit logging is async:true
19:49:03.044 INFO  FSNamesystem - KeyProvider: null
19:49:03.044 INFO  FSNamesystem - fsLock is fair: true
19:49:03.044 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
19:49:03.044 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
19:49:03.044 INFO  FSNamesystem - supergroup             = supergroup
19:49:03.044 INFO  FSNamesystem - isPermissionEnabled    = true
19:49:03.044 INFO  FSNamesystem - isStoragePolicyEnabled = true
19:49:03.044 INFO  FSNamesystem - HA Enabled: false
19:49:03.044 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:03.045 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
19:49:03.045 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
19:49:03.045 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19:49:03.045 INFO  BlockManager - The block deletion will start around 2025 Jul 15 19:49:03
19:49:03.045 INFO  GSet - Computing capacity for map BlocksMap
19:49:03.045 INFO  GSet - VM type       = 64-bit
19:49:03.045 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
19:49:03.045 INFO  GSet - capacity      = 2^23 = 8388608 entries
19:49:03.051 INFO  BlockManager - Storage policy satisfier is disabled
19:49:03.051 INFO  BlockManager - dfs.block.access.token.enable = false
19:49:03.051 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
19:49:03.051 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
19:49:03.051 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
19:49:03.051 INFO  BlockManager - defaultReplication         = 1
19:49:03.051 INFO  BlockManager - maxReplication             = 512
19:49:03.051 INFO  BlockManager - minReplication             = 1
19:49:03.051 INFO  BlockManager - maxReplicationStreams      = 2
19:49:03.051 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
19:49:03.051 INFO  BlockManager - encryptDataTransfer        = false
19:49:03.051 INFO  BlockManager - maxNumBlocksToLog          = 1000
19:49:03.051 INFO  GSet - Computing capacity for map INodeMap
19:49:03.051 INFO  GSet - VM type       = 64-bit
19:49:03.051 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
19:49:03.051 INFO  GSet - capacity      = 2^22 = 4194304 entries
19:49:03.055 INFO  FSDirectory - ACLs enabled? true
19:49:03.055 INFO  FSDirectory - POSIX ACL inheritance enabled? true
19:49:03.055 INFO  FSDirectory - XAttrs enabled? true
19:49:03.055 INFO  NameNode - Caching file names occurring more than 10 times
19:49:03.055 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
19:49:03.055 INFO  SnapshotManager - SkipList is disabled
19:49:03.055 INFO  GSet - Computing capacity for map cachedBlocks
19:49:03.055 INFO  GSet - VM type       = 64-bit
19:49:03.056 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
19:49:03.056 INFO  GSet - capacity      = 2^20 = 1048576 entries
19:49:03.056 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
19:49:03.056 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
19:49:03.056 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19:49:03.056 INFO  FSNamesystem - Retry cache on namenode is enabled
19:49:03.056 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19:49:03.056 INFO  GSet - Computing capacity for map NameNodeRetryCache
19:49:03.056 INFO  GSet - VM type       = 64-bit
19:49:03.056 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
19:49:03.056 INFO  GSet - capacity      = 2^17 = 131072 entries
19:49:03.057 INFO  Storage - Lock on /tmp/minicluster_storage12854353944789317805/name-0-1/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:03.058 INFO  Storage - Lock on /tmp/minicluster_storage12854353944789317805/name-0-2/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:03.059 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage12854353944789317805/name-0-1/current
19:49:03.059 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage12854353944789317805/name-0-2/current
19:49:03.059 INFO  FSImage - No edit log streams selected.
19:49:03.059 INFO  FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage12854353944789317805/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
19:49:03.060 INFO  FSImageFormatPBINode - Loading 1 INodes.
19:49:03.060 INFO  FSImageFormatPBINode - Successfully loaded 1 inodes
19:49:03.060 INFO  FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
19:49:03.060 INFO  FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
19:49:03.060 INFO  FSImage - Loaded image for txid 0 from /tmp/minicluster_storage12854353944789317805/name-0-1/current/fsimage_0000000000000000000
19:49:03.060 INFO  FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
19:49:03.061 INFO  FSEditLog - Starting log segment at 1
19:49:03.064 INFO  NameCache - initialized with 0 entries 0 lookups
19:49:03.064 INFO  FSNamesystem - Finished loading FSImage in 7 msecs
19:49:03.064 INFO  NameNode - RPC server is binding to localhost:0
19:49:03.064 INFO  NameNode - Enable NameNode state context:false
19:49:03.064 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
19:49:03.065 INFO  Server - Listener at localhost:44559
19:49:03.065 INFO  Server - Starting Socket Reader #1 for port 0
19:49:03.067 INFO  NameNode - Clients are to use localhost:44559 to access this namenode/service.
19:49:03.067 INFO  FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
19:49:03.079 INFO  LeaseManager - Number of blocks under construction: 0
19:49:03.079 INFO  DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
19:49:03.080 INFO  BlockManager - Start MarkedDeleteBlockScrubber thread
19:49:03.080 INFO  BlockManager - initializing replication queues
19:49:03.081 INFO  StateChange - STATE* Leaving safe mode after 0 secs
19:49:03.081 INFO  StateChange - STATE* Network topology has 0 racks and 0 datanodes
19:49:03.081 INFO  StateChange - STATE* UnderReplicatedBlocks has 0 blocks
19:49:03.083 INFO  Server - IPC Server Responder: starting
19:49:03.083 INFO  Server - IPC Server listener on 0: starting
19:49:03.084 INFO  NameNode - NameNode RPC up at: localhost/127.0.0.1:44559
19:49:03.084 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
19:49:03.084 INFO  FSNamesystem - Starting services required for active state
19:49:03.085 INFO  FSDirectory - Initializing quota with 12 thread(s)
19:49:03.085 INFO  FSDirectory - Quota initialization completed in 0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
19:49:03.086 INFO  BlockManager - Total number of blocks            = 0
19:49:03.086 INFO  BlockManager - Number of invalid blocks          = 0
19:49:03.086 INFO  BlockManager - Number of under-replicated blocks = 0
19:49:03.086 INFO  BlockManager - Number of  over-replicated blocks = 0
19:49:03.086 INFO  BlockManager - Number of blocks being written    = 0
19:49:03.086 INFO  StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 6 msec
19:49:03.086 INFO  CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
19:49:03.086 INFO  MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data1,[DISK]file:/tmp/minicluster_storage12854353944789317805/data/data2
19:49:03.087 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data1
19:49:03.087 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data2
19:49:03.097 INFO  MetricsSystemImpl - DataNode metrics system started (again)
19:49:03.097 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:03.097 INFO  BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
19:49:03.097 INFO  DataNode - Configured hostname is 127.0.0.1
19:49:03.097 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:03.097 INFO  DataNode - Starting DataNode with maxLockedMemory = 0
19:49:03.098 INFO  DataNode - Opened streaming server at /127.0.0.1:42349
19:49:03.098 INFO  DataNode - Balancing bandwidth is 104857600 bytes/s
19:49:03.098 INFO  DataNode - Number threads for balancing is 100
19:49:03.099 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
19:49:03.100 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
19:49:03.101 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19:49:03.101 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
19:49:03.101 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19:49:03.102 INFO  HttpServer2 - Jetty bound to port 42685
19:49:03.102 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
19:49:03.103 INFO  session - DefaultSessionIdManager workerName=node0
19:49:03.103 INFO  session - No SessionScavenger set, using defaults
19:49:03.103 INFO  session - node0 Scavenging every 600000ms
19:49:03.103 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@4b028e46{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
19:49:03.224 INFO  ContextHandler - Started o.e.j.w.WebAppContext@22fb53fb{datanode,/,file:///tmp/jetty-localhost-42685-hadoop-hdfs-3_3_6-tests_jar-_-any-6472223686873959946/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
19:49:03.224 INFO  AbstractConnector - Started ServerConnector@11185945{HTTP/1.1, (http/1.1)}{localhost:42685}
19:49:03.224 INFO  Server - Started @74437ms
19:49:03.225 WARN  DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
19:49:03.226 INFO  DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:36067
19:49:03.227 INFO  JvmPauseMonitor - Starting JVM pause monitor
19:49:03.227 INFO  DataNode - dnUserName = runner
19:49:03.227 INFO  DataNode - supergroup = supergroup
19:49:03.227 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
19:49:03.227 INFO  Server - Listener at localhost:39037
19:49:03.228 INFO  Server - Starting Socket Reader #1 for port 0
19:49:03.230 INFO  DataNode - Opened IPC server at /127.0.0.1:39037
19:49:03.244 INFO  DataNode - Refresh request received for nameservices: null
19:49:03.244 INFO  DataNode - Starting BPOfferServices for nameservices: <default>
19:49:03.244 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
19:49:03.245 INFO  DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:44559 starting to offer service
19:49:03.247 INFO  Server - IPC Server Responder: starting
19:49:03.247 INFO  Server - IPC Server listener on 0: starting
19:49:03.248 INFO  DataNode - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:44559
19:49:03.248 INFO  Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
19:49:03.252 INFO  Storage - Lock on /tmp/minicluster_storage12854353944789317805/data/data1/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:03.252 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data1 is not formatted for namespace 271201066. Formatting...
19:49:03.252 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
19:49:03.253 INFO  Storage - Generated new storageID DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90 for directory /tmp/minicluster_storage12854353944789317805/data/data1 
19:49:03.253 INFO  MiniDFSCluster - dnInfo.length != numDataNodes
19:49:03.253 INFO  MiniDFSCluster - Waiting for cluster to become active
19:49:03.254 INFO  Storage - Lock on /tmp/minicluster_storage12854353944789317805/data/data2/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:03.254 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data2 is not formatted for namespace 271201066. Formatting...
19:49:03.255 INFO  Storage - Generated new storageID DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1 for directory /tmp/minicluster_storage12854353944789317805/data/data2 
19:49:03.275 INFO  Storage - Analyzing storage directories for bpid BP-1692280096-10.1.0.127-1752608942905
19:49:03.275 INFO  Storage - Locking is disabled for /tmp/minicluster_storage12854353944789317805/data/data1/current/BP-1692280096-10.1.0.127-1752608942905
19:49:03.275 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data1 and block pool id BP-1692280096-10.1.0.127-1752608942905 is not formatted. Formatting ...
19:49:03.275 INFO  Storage - Formatting block pool BP-1692280096-10.1.0.127-1752608942905 directory /tmp/minicluster_storage12854353944789317805/data/data1/current/BP-1692280096-10.1.0.127-1752608942905/current
19:49:03.293 INFO  Storage - Analyzing storage directories for bpid BP-1692280096-10.1.0.127-1752608942905
19:49:03.293 INFO  Storage - Locking is disabled for /tmp/minicluster_storage12854353944789317805/data/data2/current/BP-1692280096-10.1.0.127-1752608942905
19:49:03.293 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data2 and block pool id BP-1692280096-10.1.0.127-1752608942905 is not formatted. Formatting ...
19:49:03.293 INFO  Storage - Formatting block pool BP-1692280096-10.1.0.127-1752608942905 directory /tmp/minicluster_storage12854353944789317805/data/data2/current/BP-1692280096-10.1.0.127-1752608942905/current
19:49:03.294 INFO  DataNode - Setting up storage: nsid=271201066;bpid=BP-1692280096-10.1.0.127-1752608942905;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=271201066;c=1752608942905;bpid=BP-1692280096-10.1.0.127-1752608942905;dnuuid=null
19:49:03.295 INFO  DataNode - Generated and persisted new Datanode UUID 7f11aede-15d3-490f-ae98-617b6f5136b8
19:49:03.295 INFO  FsDatasetImpl - The datanode lock is a read write lock
19:49:03.296 INFO  FsDatasetImpl - Added new volume: DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90
19:49:03.296 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data1, StorageType: DISK
19:49:03.296 INFO  FsDatasetImpl - Added new volume: DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1
19:49:03.296 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage12854353944789317805/data/data2, StorageType: DISK
19:49:03.296 INFO  MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
19:49:03.297 INFO  FsDatasetImpl - Registered FSDatasetState MBean
19:49:03.297 INFO  FsDatasetImpl - Adding block pool BP-1692280096-10.1.0.127-1752608942905
19:49:03.297 INFO  FsDatasetImpl - Scanning block pool BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data1...
19:49:03.298 INFO  FsDatasetImpl - Scanning block pool BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data2...
19:49:03.298 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage12854353944789317805/data/data1/current/BP-1692280096-10.1.0.127-1752608942905/current, will proceed with Du for space computation calculation, 
19:49:03.298 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage12854353944789317805/data/data2/current/BP-1692280096-10.1.0.127-1752608942905/current, will proceed with Du for space computation calculation, 
19:49:03.310 INFO  FsDatasetImpl - Time taken to scan block pool BP-1692280096-10.1.0.127-1752608942905 on /tmp/minicluster_storage12854353944789317805/data/data2: 12ms
19:49:03.310 INFO  FsDatasetImpl - Time taken to scan block pool BP-1692280096-10.1.0.127-1752608942905 on /tmp/minicluster_storage12854353944789317805/data/data1: 13ms
19:49:03.310 INFO  FsDatasetImpl - Total time to scan all replicas for block pool BP-1692280096-10.1.0.127-1752608942905: 13ms
19:49:03.310 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data1...
19:49:03.310 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage12854353944789317805/data/data1/current/BP-1692280096-10.1.0.127-1752608942905/current/replicas doesn't exist 
19:49:03.310 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data2...
19:49:03.310 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage12854353944789317805/data/data2/current/BP-1692280096-10.1.0.127-1752608942905/current/replicas doesn't exist 
19:49:03.310 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data2: 0ms
19:49:03.310 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data1: 0ms
19:49:03.311 INFO  FsDatasetImpl - Total time to add all replicas to map for block pool BP-1692280096-10.1.0.127-1752608942905: 1ms
19:49:03.311 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage12854353944789317805/data/data1
19:49:03.311 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage12854353944789317805/data/data1
19:49:03.311 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage12854353944789317805/data/data2
19:49:03.311 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage12854353944789317805/data/data2
19:49:03.311 INFO  VolumeScanner - Now scanning bpid BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data2
19:49:03.311 WARN  DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
19:49:03.311 INFO  VolumeScanner - Now scanning bpid BP-1692280096-10.1.0.127-1752608942905 on volume /tmp/minicluster_storage12854353944789317805/data/data1
19:49:03.311 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage12854353944789317805/data/data2, DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1): finished scanning block pool BP-1692280096-10.1.0.127-1752608942905
19:49:03.311 INFO  DirectoryScanner - Periodic Directory Tree Verification scan starting in 6775052ms with interval of 21600000ms and throttle limit of -1ms/s
19:49:03.311 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage12854353944789317805/data/data1, DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90): finished scanning block pool BP-1692280096-10.1.0.127-1752608942905
19:49:03.312 INFO  DataNode - Block pool BP-1692280096-10.1.0.127-1752608942905 (Datanode Uuid 7f11aede-15d3-490f-ae98-617b6f5136b8) service to localhost/127.0.0.1:44559 beginning handshake with NN
19:49:03.312 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage12854353944789317805/data/data2, DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1): no suitable block pools found to scan.  Waiting 1814399999 ms.
19:49:03.312 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage12854353944789317805/data/data1, DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90): no suitable block pools found to scan.  Waiting 1814399999 ms.
19:49:03.312 INFO  StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:42349, datanodeUuid=7f11aede-15d3-490f-ae98-617b6f5136b8, infoPort=36067, infoSecurePort=0, ipcPort=39037, storageInfo=lv=-57;cid=testClusterID;nsid=271201066;c=1752608942905) storage 7f11aede-15d3-490f-ae98-617b6f5136b8
19:49:03.312 INFO  NetworkTopology - Adding a new node: /default-rack/127.0.0.1:42349
19:49:03.312 INFO  BlockReportLeaseManager - Registered DN 7f11aede-15d3-490f-ae98-617b6f5136b8 (127.0.0.1:42349).
19:49:03.313 INFO  DataNode - Block pool BP-1692280096-10.1.0.127-1752608942905 (Datanode Uuid 7f11aede-15d3-490f-ae98-617b6f5136b8) service to localhost/127.0.0.1:44559 successfully registered with NN
19:49:03.313 INFO  DataNode - For namenode localhost/127.0.0.1:44559 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
19:49:03.313 INFO  DataNode - Starting IBR Task Handler.
19:49:03.314 INFO  DatanodeDescriptor - Adding new storage ID DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90 for DN 127.0.0.1:42349
19:49:03.314 INFO  DatanodeDescriptor - Adding new storage ID DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1 for DN 127.0.0.1:42349
19:49:03.314 INFO  DataNode - After receiving heartbeat response, updating state of namenode localhost:44559 to active
19:49:03.315 INFO  BlockStateChange - BLOCK* processReport 0x71576b1cd33c4d73 with lease ID 0xe12abaee028f51b0: Processing first storage report for DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1 from datanode DatanodeRegistration(127.0.0.1:42349, datanodeUuid=7f11aede-15d3-490f-ae98-617b6f5136b8, infoPort=36067, infoSecurePort=0, ipcPort=39037, storageInfo=lv=-57;cid=testClusterID;nsid=271201066;c=1752608942905)
19:49:03.315 INFO  BlockStateChange - BLOCK* processReport 0x71576b1cd33c4d73 with lease ID 0xe12abaee028f51b0: from storage DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1 node DatanodeRegistration(127.0.0.1:42349, datanodeUuid=7f11aede-15d3-490f-ae98-617b6f5136b8, infoPort=36067, infoSecurePort=0, ipcPort=39037, storageInfo=lv=-57;cid=testClusterID;nsid=271201066;c=1752608942905), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
19:49:03.315 INFO  BlockStateChange - BLOCK* processReport 0x71576b1cd33c4d73 with lease ID 0xe12abaee028f51b0: Processing first storage report for DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90 from datanode DatanodeRegistration(127.0.0.1:42349, datanodeUuid=7f11aede-15d3-490f-ae98-617b6f5136b8, infoPort=36067, infoSecurePort=0, ipcPort=39037, storageInfo=lv=-57;cid=testClusterID;nsid=271201066;c=1752608942905)
19:49:03.315 INFO  BlockStateChange - BLOCK* processReport 0x71576b1cd33c4d73 with lease ID 0xe12abaee028f51b0: from storage DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90 node DatanodeRegistration(127.0.0.1:42349, datanodeUuid=7f11aede-15d3-490f-ae98-617b6f5136b8, infoPort=36067, infoSecurePort=0, ipcPort=39037, storageInfo=lv=-57;cid=testClusterID;nsid=271201066;c=1752608942905), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
19:49:03.315 INFO  DataNode - Successfully sent block report 0x71576b1cd33c4d73 with lease ID 0xe12abaee028f51b0 to namenode: localhost/127.0.0.1:44559,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
19:49:03.315 INFO  DataNode - Got finalize command for block pool BP-1692280096-10.1.0.127-1752608942905
19:49:03.354 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
19:49:03.354 INFO  MiniDFSCluster - Cluster is active
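The startup log above, and the audit lines that follow, correspond to a common pattern in HDFS-backed tests: bring up a single-DataNode MiniDFSCluster, then stage a CRAM, its reference FASTA, and the .fai index into HDFS before reading them back through Spark. The sketch below illustrates that staging step only; it uses standard Hadoop test and FileSystem APIs, the local paths are placeholders, and it is not the test's own code.

```java
// Minimal sketch (not the test's actual code) of the staging step the
// surrounding log records: start a one-DataNode MiniDFSCluster and copy a
// CRAM plus its reference FASTA and .fai index into HDFS.
// Local source paths below are hypothetical placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsStagingSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        try {
            cluster.waitActive();                              // log: "Cluster is active"
            FileSystem fs = cluster.getFileSystem();
            // Stage the inputs that the audit log shows being created under /user/runner.
            fs.copyFromLocalFile(new Path("/local/path/reads.cram"),     new Path("/user/runner/hdfs.cram"));
            fs.copyFromLocalFile(new Path("/local/path/ref.fasta"),      new Path("/user/runner/hdfs.fasta"));
            fs.copyFromLocalFile(new Path("/local/path/ref.fasta.fai"),  new Path("/user/runner/hdfs.fasta.fai"));
            System.out.println("HDFS root: " + fs.getUri());   // e.g. hdfs://localhost:<ephemeral port>
        } finally {
            cluster.shutdown();                                // mirrors "Shutting down the Mini HDFS Cluster"
        }
    }
}
```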
19:49:03.356 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.358 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.cram	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
19:49:03.359 INFO  StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:42349 for /user/runner/hdfs.cram
19:49:03.360 INFO  DataNode - Receiving BP-1692280096-10.1.0.127-1752608942905:blk_1073741825_1001 src: /127.0.0.1:58898 dest: /127.0.0.1:42349
19:49:03.362 INFO  clienttrace - src: /127.0.0.1:58898, dest: /127.0.0.1:42349, bytes: 50619, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1966585327_1, offset: 0, srvID: 7f11aede-15d3-490f-ae98-617b6f5136b8, blockid: BP-1692280096-10.1.0.127-1752608942905:blk_1073741825_1001, duration(ns): 568728
19:49:03.362 INFO  DataNode - PacketResponder: BP-1692280096-10.1.0.127-1752608942905:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
19:49:03.363 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.cram is closed by DFSClient_NONMAPREDUCE_-1966585327_1
19:49:03.363 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.364 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.fasta	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
19:49:03.365 INFO  StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:42349 for /user/runner/hdfs.fasta
19:49:03.366 INFO  DataNode - Receiving BP-1692280096-10.1.0.127-1752608942905:blk_1073741826_1002 src: /127.0.0.1:58910 dest: /127.0.0.1:42349
19:49:03.370 INFO  clienttrace - src: /127.0.0.1:58910, dest: /127.0.0.1:42349, bytes: 1016671, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1966585327_1, offset: 0, srvID: 7f11aede-15d3-490f-ae98-617b6f5136b8, blockid: BP-1692280096-10.1.0.127-1752608942905:blk_1073741826_1002, duration(ns): 3008709
19:49:03.370 INFO  DataNode - PacketResponder: BP-1692280096-10.1.0.127-1752608942905:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
19:49:03.370 INFO  FSNamesystem - BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.fasta
19:49:03.771 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.fasta is closed by DFSClient_NONMAPREDUCE_-1966585327_1
19:49:03.772 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:03.772 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.fasta.fai	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
19:49:03.773 INFO  StateChange - BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:42349 for /user/runner/hdfs.fasta.fai
19:49:03.774 INFO  DataNode - Receiving BP-1692280096-10.1.0.127-1752608942905:blk_1073741827_1003 src: /127.0.0.1:58918 dest: /127.0.0.1:42349
19:49:03.774 INFO  clienttrace - src: /127.0.0.1:58918, dest: /127.0.0.1:42349, bytes: 19, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1966585327_1, offset: 0, srvID: 7f11aede-15d3-490f-ae98-617b6f5136b8, blockid: BP-1692280096-10.1.0.127-1752608942905:blk_1073741827_1003, duration(ns): 342311
19:49:03.775 INFO  DataNode - PacketResponder: BP-1692280096-10.1.0.127-1752608942905:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
19:49:03.775 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.fasta.fai is closed by DFSClient_NONMAPREDUCE_-1966585327_1
19:49:03.776 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.776 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.776 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.777 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.777 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.778 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
19:49:03.778 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
19:49:03.778 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.779 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.779 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:03.779 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:03.780 WARN  DFSUtil - Unexpected value for data transfer bytes=23 duration=0
19:49:03.782 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
19:49:03.782 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.782 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.783 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
19:49:03.783 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
19:49:03.784 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.784 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.785 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
19:49:03.785 INFO  MemoryStore - Block broadcast_587 stored as values in memory (estimated size 472.0 B, free 1920.0 MiB)
19:49:03.786 INFO  MemoryStore - Block broadcast_587_piece0 stored as bytes in memory (estimated size 129.0 B, free 1920.0 MiB)
19:49:03.786 INFO  BlockManagerInfo - Added broadcast_587_piece0 in memory on localhost:36125 (size: 129.0 B, free: 1920.0 MiB)
19:49:03.786 INFO  SparkContext - Created broadcast 587 from broadcast at CramSource.java:114
19:49:03.787 INFO  MemoryStore - Block broadcast_588 stored as values in memory (estimated size 297.9 KiB, free 1919.7 MiB)
19:49:03.799 INFO  MemoryStore - Block broadcast_588_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
19:49:03.799 INFO  BlockManagerInfo - Added broadcast_588_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1920.0 MiB)
19:49:03.799 INFO  SparkContext - Created broadcast 588 from newAPIHadoopFile at PathSplitSource.java:96
19:49:03.822 INFO  MemoryStore - Block broadcast_589 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
19:49:03.822 INFO  MemoryStore - Block broadcast_589_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
19:49:03.822 INFO  BlockManagerInfo - Added broadcast_589_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1920.0 MiB)
19:49:03.823 INFO  SparkContext - Created broadcast 589 from broadcast at CramSource.java:114
19:49:03.823 INFO  MemoryStore - Block broadcast_590 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
19:49:03.829 INFO  MemoryStore - Block broadcast_590_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
19:49:03.830 INFO  BlockManagerInfo - Added broadcast_590_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.9 MiB)
19:49:03.830 INFO  SparkContext - Created broadcast 590 from newAPIHadoopFile at PathSplitSource.java:96
19:49:03.843 INFO  FileInputFormat - Total input files to process : 1
19:49:03.868 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:244
19:49:03.868 INFO  DAGScheduler - Got job 221 (collect at ReadsSparkSourceUnitTest.java:244) with 1 output partitions
19:49:03.868 INFO  DAGScheduler - Final stage: ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244)
19:49:03.868 INFO  DAGScheduler - Parents of final stage: List()
19:49:03.868 INFO  DAGScheduler - Missing parents: List()
19:49:03.869 INFO  DAGScheduler - Submitting ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:03.880 INFO  MemoryStore - Block broadcast_591 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
19:49:03.881 INFO  MemoryStore - Block broadcast_591_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
19:49:03.881 INFO  BlockManagerInfo - Added broadcast_591_piece0 in memory on localhost:36125 (size: 103.6 KiB, free: 1919.8 MiB)
19:49:03.881 INFO  SparkContext - Created broadcast 591 from broadcast at DAGScheduler.scala:1580
19:49:03.881 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:03.881 INFO  TaskSchedulerImpl - Adding task set 285.0 with 1 tasks resource profile 0
19:49:03.882 INFO  TaskSetManager - Starting task 0.0 in stage 285.0 (TID 372) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
19:49:03.882 INFO  Executor - Running task 0.0 in stage 285.0 (TID 372)
19:49:03.909 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
19:49:03.917 INFO  Executor - Finished task 0.0 in stage 285.0 (TID 372). 154101 bytes result sent to driver
19:49:03.918 INFO  TaskSetManager - Finished task 0.0 in stage 285.0 (TID 372) in 37 ms on localhost (executor driver) (1/1)
19:49:03.918 INFO  TaskSchedulerImpl - Removed TaskSet 285.0, whose tasks have all completed, from pool 
19:49:03.918 INFO  DAGScheduler - ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244) finished in 0.049 s
19:49:03.918 INFO  DAGScheduler - Job 221 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:03.918 INFO  TaskSchedulerImpl - Killing all running tasks in stage 285: Stage finished
19:49:03.918 INFO  DAGScheduler - Job 221 finished: collect at ReadsSparkSourceUnitTest.java:244, took 0.050196 s
19:49:03.919 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.919 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.920 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.920 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.921 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.921 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
19:49:03.922 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
19:49:03.922 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.922 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:03.923 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:03.923 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:03.924 WARN  DFSUtil - Unexpected value for data transfer bytes=23 duration=0
19:49:03.926 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
19:49:03.926 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.927 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.927 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
19:49:03.927 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
19:49:03.928 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.928 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.929 INFO  MemoryStore - Block broadcast_592 stored as values in memory (estimated size 472.0 B, free 1918.9 MiB)
19:49:03.930 INFO  MemoryStore - Block broadcast_592_piece0 stored as bytes in memory (estimated size 129.0 B, free 1918.9 MiB)
19:49:03.930 INFO  BlockManagerInfo - Added broadcast_592_piece0 in memory on localhost:36125 (size: 129.0 B, free: 1919.8 MiB)
19:49:03.930 INFO  SparkContext - Created broadcast 592 from broadcast at CramSource.java:114
19:49:03.931 INFO  MemoryStore - Block broadcast_593 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
19:49:03.937 INFO  MemoryStore - Block broadcast_593_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
19:49:03.937 INFO  BlockManagerInfo - Added broadcast_593_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:03.937 INFO  SparkContext - Created broadcast 593 from newAPIHadoopFile at PathSplitSource.java:96
19:49:03.951 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.951 INFO  FileInputFormat - Total input files to process : 1
19:49:03.952 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:03.977 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:245
19:49:03.977 INFO  DAGScheduler - Got job 222 (collect at ReadsSparkSourceUnitTest.java:245) with 1 output partitions
19:49:03.977 INFO  DAGScheduler - Final stage: ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245)
19:49:03.977 INFO  DAGScheduler - Parents of final stage: List()
19:49:03.977 INFO  DAGScheduler - Missing parents: List()
19:49:03.977 INFO  DAGScheduler - Submitting ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:03.989 INFO  MemoryStore - Block broadcast_594 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
19:49:03.990 INFO  MemoryStore - Block broadcast_594_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
19:49:03.990 INFO  BlockManagerInfo - Added broadcast_594_piece0 in memory on localhost:36125 (size: 103.6 KiB, free: 1919.7 MiB)
19:49:03.990 INFO  SparkContext - Created broadcast 594 from broadcast at DAGScheduler.scala:1580
19:49:03.991 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:03.991 INFO  TaskSchedulerImpl - Adding task set 286.0 with 1 tasks resource profile 0
19:49:03.991 INFO  TaskSetManager - Starting task 0.0 in stage 286.0 (TID 373) (localhost, executor driver, partition 0, ANY, 7797 bytes) 
19:49:03.991 INFO  Executor - Running task 0.0 in stage 286.0 (TID 373)
19:49:04.011 INFO  NewHadoopRDD - Input split: hdfs://localhost:44559/user/runner/hdfs.cram:0+50619
19:49:04.012 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:04.012 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
19:49:04.013 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
19:49:04.013 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
19:49:04.014 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:04.014 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
19:49:04.014 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:04.015 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
19:49:04.017 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
19:49:04.024 WARN  DFSUtil - Unexpected value for data transfer bytes=496231 duration=0
19:49:04.025 WARN  DFSUtil - Unexpected value for data transfer bytes=364135 duration=0
19:49:04.026 WARN  DFSUtil - Unexpected value for data transfer bytes=99943 duration=0
19:49:04.030 INFO  Executor - Finished task 0.0 in stage 286.0 (TID 373). 154101 bytes result sent to driver
19:49:04.031 INFO  TaskSetManager - Finished task 0.0 in stage 286.0 (TID 373) in 40 ms on localhost (executor driver) (1/1)
19:49:04.031 INFO  TaskSchedulerImpl - Removed TaskSet 286.0, whose tasks have all completed, from pool 
19:49:04.031 INFO  DAGScheduler - ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245) finished in 0.054 s
19:49:04.031 INFO  DAGScheduler - Job 222 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:04.031 INFO  TaskSchedulerImpl - Killing all running tasks in stage 286: Stage finished
19:49:04.031 INFO  DAGScheduler - Job 222 finished: collect at ReadsSparkSourceUnitTest.java:245, took 0.054228 s
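Jobs 221 and 222 above read the same CRAM twice, once from the local file:// path and once from the hdfs:// copy, each as a single-partition collect. The check such a pair of jobs implies is sketched below; `localReads` and `hdfsReads` are hypothetical stand-ins for whatever RDDs the test builds, and this is not GATK's ReadsSparkSource API, only the driver-side comparison pattern.

```java
// Minimal sketch of the round-trip check implied by the two collect jobs:
// reads loaded from the local CRAM and from its HDFS copy should match.
// The RDD parameters are hypothetical inputs, not a GATK API.
import java.util.List;
import org.apache.spark.api.java.JavaRDD;

public final class RoundTripCheckSketch {
    private RoundTripCheckSketch() {}

    /** Collects both RDDs on the driver and verifies element-wise equality. */
    public static <T> void assertSameReads(JavaRDD<T> localReads, JavaRDD<T> hdfsReads) {
        List<T> fromLocal = localReads.collect();   // one-partition collect, as in Job 221
        List<T> fromHdfs  = hdfsReads.collect();    // one-partition collect, as in Job 222
        if (!fromLocal.equals(fromHdfs)) {
            throw new AssertionError("Reads read back from HDFS differ from the local CRAM");
        }
    }
}
```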
19:49:04.034 INFO  MiniDFSCluster - Shutting down the Mini HDFS Cluster
19:49:04.035 INFO  MiniDFSCluster - Shutting down DataNode 0
19:49:04.035 INFO  DirectoryScanner - Shutdown has been called
19:49:04.035 INFO  DataNode - Closing all peers.
19:49:04.035 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage12854353944789317805/data/data1, DS-c90e07b2-9d6c-49fc-b82f-8eec9e8a9c90) exiting.
19:49:04.035 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage12854353944789317805/data/data2, DS-1434996b-eb8c-4df4-8b1f-cb2643a52bd1) exiting.
19:49:04.038 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@22fb53fb{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
19:49:04.038 INFO  AbstractConnector - Stopped ServerConnector@11185945{HTTP/1.1, (http/1.1)}{localhost:0}
19:49:04.038 INFO  session - node0 Stopped scavenging
19:49:04.038 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@4b028e46{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
19:49:04.039 INFO  DataNode - Waiting up to 30 seconds for transfer threads to complete
19:49:04.040 INFO  Server - Stopping server on 39037
19:49:04.040 INFO  Server - Stopping IPC Server Responder
19:49:04.040 INFO  Server - Stopping IPC Server listener on 0
19:49:04.041 ERROR DataNode - Command processor encountered interrupt and exit.
19:49:04.041 WARN  DataNode - Ending command processor service for: Thread[Command processor,5,main]
19:49:04.041 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
19:49:04.041 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
19:49:04.041 WARN  DataNode - Ending block pool service for: Block pool BP-1692280096-10.1.0.127-1752608942905 (Datanode Uuid 7f11aede-15d3-490f-ae98-617b6f5136b8) service to localhost/127.0.0.1:44559
19:49:04.041 INFO  DataNode - Removed Block pool BP-1692280096-10.1.0.127-1752608942905 (Datanode Uuid 7f11aede-15d3-490f-ae98-617b6f5136b8)
19:49:04.041 INFO  FsDatasetImpl - Removing block pool BP-1692280096-10.1.0.127-1752608942905
19:49:04.047 INFO  FsDatasetAsyncDiskService - Shutting down all async disk service threads
19:49:04.047 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
19:49:04.047 INFO  FsDatasetAsyncDiskService - All async disk service threads have been shut down
19:49:04.047 INFO  RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
19:49:04.047 INFO  RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
19:49:04.047 INFO  DataNode - Shutdown complete.
19:49:04.047 INFO  MiniDFSCluster - Shutting down the namenode
19:49:04.047 INFO  FSNamesystem - Stopping services started for active state
19:49:04.047 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
19:49:04.047 INFO  FSEditLog - Ending log segment 1, 18
19:49:04.047 INFO  FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
19:49:04.048 INFO  FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
19:49:04.048 INFO  FSEditLog - Number of transactions: 19 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 5 Number of syncs: 15 SyncTimes(ms): 1 1 
19:49:04.048 INFO  BlockManagerInfo - Removed broadcast_591_piece0 on localhost:36125 in memory (size: 103.6 KiB, free: 1919.8 MiB)
19:49:04.048 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage12854353944789317805/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage12854353944789317805/name-0-1/current/edits_0000000000000000001-0000000000000000019
19:49:04.048 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage12854353944789317805/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage12854353944789317805/name-0-2/current/edits_0000000000000000001-0000000000000000019
19:49:04.049 INFO  FSEditLog - FSEditLogAsync was interrupted, exiting
19:49:04.049 INFO  BlockManagerInfo - Removed broadcast_588_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:04.049 INFO  Server - Stopping server on 44559
19:49:04.049 INFO  CacheReplicationMonitor - Shutting down CacheReplicationMonitor
19:49:04.049 INFO  Server - Stopping IPC Server listener on 0
19:49:04.050 INFO  Server - Stopping IPC Server Responder
19:49:04.050 INFO  BlockManagerInfo - Removed broadcast_594_piece0 on localhost:36125 in memory (size: 103.6 KiB, free: 1919.9 MiB)
19:49:04.051 INFO  BlockManager - Stopping MarkedDeleteBlockScrubber.
19:49:04.051 INFO  BlockManager - Stopping RedundancyMonitor.
19:49:04.051 INFO  BlockManagerInfo - Removed broadcast_589_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.9 MiB)
19:49:04.052 INFO  BlockManagerInfo - Removed broadcast_593_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.053 INFO  BlockManagerInfo - Removed broadcast_592_piece0 on localhost:36125 in memory (size: 129.0 B, free: 1920.0 MiB)
19:49:04.054 INFO  BlockManagerInfo - Removed broadcast_587_piece0 on localhost:36125 in memory (size: 129.0 B, free: 1920.0 MiB)
19:49:04.054 INFO  BlockManagerInfo - Removed broadcast_590_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.056 INFO  FSNamesystem - Stopping services started for active state
19:49:04.056 INFO  FSNamesystem - Stopping services started for standby state
19:49:04.057 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@48d20304{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
19:49:04.058 INFO  AbstractConnector - Stopped ServerConnector@6671eee9{HTTP/1.1, (http/1.1)}{localhost:0}
19:49:04.058 INFO  session - node0 Stopped scavenging
19:49:04.058 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@3930425b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
19:49:04.059 INFO  MetricsSystemImpl - Stopping DataNode metrics system...
19:49:04.059 INFO  MetricsSystemImpl - DataNode metrics system stopped.
19:49:04.059 INFO  MetricsSystemImpl - DataNode metrics system shutdown complete.
19:49:04.067 INFO  MemoryStore - Block broadcast_595 stored as values in memory (estimated size 297.9 KiB, free 1919.7 MiB)
19:49:04.073 INFO  MemoryStore - Block broadcast_595_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
19:49:04.073 INFO  BlockManagerInfo - Added broadcast_595_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.073 INFO  SparkContext - Created broadcast 595 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.098 INFO  BlockManagerInfo - Removed broadcast_595_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.100 INFO  MemoryStore - Block broadcast_596 stored as values in memory (estimated size 298.0 KiB, free 1919.7 MiB)
19:49:04.106 INFO  MemoryStore - Block broadcast_596_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.7 MiB)
19:49:04.106 INFO  BlockManagerInfo - Added broadcast_596_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1920.0 MiB)
19:49:04.106 INFO  SparkContext - Created broadcast 596 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.126 INFO  MemoryStore - Block broadcast_597 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
19:49:04.133 INFO  MemoryStore - Block broadcast_597_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
19:49:04.133 INFO  BlockManagerInfo - Added broadcast_597_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.9 MiB)
19:49:04.133 INFO  SparkContext - Created broadcast 597 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.155 INFO  MemoryStore - Block broadcast_598 stored as values in memory (estimated size 536.0 B, free 1919.3 MiB)
19:49:04.155 INFO  MemoryStore - Block broadcast_598_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.3 MiB)
19:49:04.155 INFO  BlockManagerInfo - Added broadcast_598_piece0 in memory on localhost:36125 (size: 187.0 B, free: 1919.9 MiB)
19:49:04.156 INFO  SparkContext - Created broadcast 598 from broadcast at CramSource.java:114
19:49:04.156 INFO  MemoryStore - Block broadcast_599 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
19:49:04.162 INFO  MemoryStore - Block broadcast_599_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.0 MiB)
19:49:04.163 INFO  BlockManagerInfo - Added broadcast_599_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.9 MiB)
19:49:04.163 INFO  SparkContext - Created broadcast 599 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.177 INFO  MemoryStore - Block broadcast_600 stored as values in memory (estimated size 608.0 B, free 1919.0 MiB)
19:49:04.178 INFO  MemoryStore - Block broadcast_600_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.0 MiB)
19:49:04.178 INFO  BlockManagerInfo - Added broadcast_600_piece0 in memory on localhost:36125 (size: 206.0 B, free: 1919.9 MiB)
19:49:04.178 INFO  SparkContext - Created broadcast 600 from broadcast at CramSource.java:114
19:49:04.179 INFO  MemoryStore - Block broadcast_601 stored as values in memory (estimated size 297.9 KiB, free 1918.7 MiB)
19:49:04.185 INFO  MemoryStore - Block broadcast_601_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
19:49:04.185 INFO  BlockManagerInfo - Added broadcast_601_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:04.185 INFO  SparkContext - Created broadcast 601 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.200 INFO  MemoryStore - Block broadcast_602 stored as values in memory (estimated size 298.0 KiB, free 1918.3 MiB)
19:49:04.211 INFO  MemoryStore - Block broadcast_602_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.3 MiB)
19:49:04.211 INFO  BlockManagerInfo - Added broadcast_602_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.8 MiB)
19:49:04.211 INFO  SparkContext - Created broadcast 602 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.237 INFO  FileInputFormat - Total input files to process : 1
19:49:04.273 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:163
19:49:04.273 INFO  DAGScheduler - Got job 223 (collect at ReadsSparkSourceUnitTest.java:163) with 1 output partitions
19:49:04.273 INFO  DAGScheduler - Final stage: ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163)
19:49:04.273 INFO  DAGScheduler - Parents of final stage: List()
19:49:04.273 INFO  DAGScheduler - Missing parents: List()
19:49:04.273 INFO  DAGScheduler - Submitting ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:04.290 INFO  MemoryStore - Block broadcast_603 stored as values in memory (estimated size 426.2 KiB, free 1917.9 MiB)
19:49:04.291 INFO  MemoryStore - Block broadcast_603_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.7 MiB)
19:49:04.291 INFO  BlockManagerInfo - Added broadcast_603_piece0 in memory on localhost:36125 (size: 153.6 KiB, free: 1919.6 MiB)
19:49:04.291 INFO  SparkContext - Created broadcast 603 from broadcast at DAGScheduler.scala:1580
19:49:04.291 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:04.291 INFO  TaskSchedulerImpl - Adding task set 287.0 with 1 tasks resource profile 0
19:49:04.292 INFO  TaskSetManager - Starting task 0.0 in stage 287.0 (TID 374) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes) 
19:49:04.292 INFO  Executor - Running task 0.0 in stage 287.0 (TID 374)
19:49:04.321 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
19:49:04.331 INFO  Executor - Finished task 0.0 in stage 287.0 (TID 374). 650141 bytes result sent to driver
19:49:04.333 INFO  TaskSetManager - Finished task 0.0 in stage 287.0 (TID 374) in 41 ms on localhost (executor driver) (1/1)
19:49:04.333 INFO  TaskSchedulerImpl - Removed TaskSet 287.0, whose tasks have all completed, from pool 
19:49:04.333 INFO  DAGScheduler - ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163) finished in 0.060 s
19:49:04.333 INFO  DAGScheduler - Job 223 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:04.333 INFO  TaskSchedulerImpl - Killing all running tasks in stage 287: Stage finished
19:49:04.333 INFO  DAGScheduler - Job 223 finished: collect at ReadsSparkSourceUnitTest.java:163, took 0.060678 s
19:49:04.336 INFO  MemoryStore - Block broadcast_604 stored as values in memory (estimated size 208.0 B, free 1917.7 MiB)
19:49:04.339 INFO  MemoryStore - Block broadcast_604_piece0 stored as bytes in memory (estimated size 222.0 B, free 1917.7 MiB)
19:49:04.339 INFO  BlockManagerInfo - Added broadcast_604_piece0 in memory on localhost:36125 (size: 222.0 B, free: 1919.6 MiB)
19:49:04.339 INFO  SparkContext - Created broadcast 604 from broadcast at AbstractBinarySamSource.java:82
19:49:04.340 INFO  MemoryStore - Block broadcast_605 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
19:49:04.345 INFO  BlockManagerInfo - Removed broadcast_598_piece0 on localhost:36125 in memory (size: 187.0 B, free: 1919.6 MiB)
19:49:04.345 INFO  BlockManagerInfo - Removed broadcast_597_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.7 MiB)
19:49:04.346 INFO  BlockManagerInfo - Removed broadcast_602_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.7 MiB)
19:49:04.346 INFO  BlockManagerInfo - Removed broadcast_596_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.8 MiB)
19:49:04.346 INFO  BlockManagerInfo - Removed broadcast_603_piece0 on localhost:36125 in memory (size: 153.6 KiB, free: 1919.9 MiB)
19:49:04.347 INFO  BlockManagerInfo - Removed broadcast_600_piece0 on localhost:36125 in memory (size: 206.0 B, free: 1919.9 MiB)
19:49:04.347 INFO  BlockManagerInfo - Removed broadcast_601_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.347 INFO  BlockManagerInfo - Removed broadcast_599_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.350 INFO  MemoryStore - Block broadcast_605_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
19:49:04.350 INFO  BlockManagerInfo - Added broadcast_605_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1920.0 MiB)
19:49:04.350 INFO  SparkContext - Created broadcast 605 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.371 INFO  FileInputFormat - Total input files to process : 1
19:49:04.374 INFO  SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:265
19:49:04.375 INFO  DAGScheduler - Got job 224 (count at ReadsSparkSourceUnitTest.java:265) with 1 output partitions
19:49:04.375 INFO  DAGScheduler - Final stage: ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265)
19:49:04.375 INFO  DAGScheduler - Parents of final stage: List()
19:49:04.375 INFO  DAGScheduler - Missing parents: List()
19:49:04.375 INFO  DAGScheduler - Submitting ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:04.391 INFO  MemoryStore - Block broadcast_606 stored as values in memory (estimated size 426.2 KiB, free 1919.2 MiB)
19:49:04.392 INFO  MemoryStore - Block broadcast_606_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1919.1 MiB)
19:49:04.392 INFO  BlockManagerInfo - Added broadcast_606_piece0 in memory on localhost:36125 (size: 153.7 KiB, free: 1919.8 MiB)
19:49:04.393 INFO  SparkContext - Created broadcast 606 from broadcast at DAGScheduler.scala:1580
19:49:04.393 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:04.393 INFO  TaskSchedulerImpl - Adding task set 288.0 with 1 tasks resource profile 0
19:49:04.393 INFO  TaskSetManager - Starting task 0.0 in stage 288.0 (TID 375) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
19:49:04.393 INFO  Executor - Running task 0.0 in stage 288.0 (TID 375)
19:49:04.423 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
19:49:04.427 INFO  Executor - Finished task 0.0 in stage 288.0 (TID 375). 989 bytes result sent to driver
19:49:04.427 INFO  TaskSetManager - Finished task 0.0 in stage 288.0 (TID 375) in 34 ms on localhost (executor driver) (1/1)
19:49:04.427 INFO  TaskSchedulerImpl - Removed TaskSet 288.0, whose tasks have all completed, from pool 
19:49:04.427 INFO  DAGScheduler - ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265) finished in 0.052 s
19:49:04.427 INFO  DAGScheduler - Job 224 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:04.427 INFO  TaskSchedulerImpl - Killing all running tasks in stage 288: Stage finished
19:49:04.427 INFO  DAGScheduler - Job 224 finished: count at ReadsSparkSourceUnitTest.java:265, took 0.053042 s
19:49:04.429 INFO  MemoryStore - Block broadcast_607 stored as values in memory (estimated size 160.0 B, free 1919.1 MiB)
19:49:04.430 INFO  MemoryStore - Block broadcast_607_piece0 stored as bytes in memory (estimated size 212.0 B, free 1919.1 MiB)
19:49:04.430 INFO  BlockManagerInfo - Added broadcast_607_piece0 in memory on localhost:36125 (size: 212.0 B, free: 1919.8 MiB)
19:49:04.430 INFO  SparkContext - Created broadcast 607 from broadcast at AbstractBinarySamSource.java:82
19:49:04.431 INFO  MemoryStore - Block broadcast_608 stored as values in memory (estimated size 298.0 KiB, free 1918.8 MiB)
19:49:04.437 INFO  MemoryStore - Block broadcast_608_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.8 MiB)
19:49:04.437 INFO  BlockManagerInfo - Added broadcast_608_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.8 MiB)
19:49:04.437 INFO  SparkContext - Created broadcast 608 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.457 INFO  FileInputFormat - Total input files to process : 1
19:49:04.461 INFO  SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:286
19:49:04.461 INFO  DAGScheduler - Got job 225 (count at ReadsSparkSourceUnitTest.java:286) with 1 output partitions
19:49:04.461 INFO  DAGScheduler - Final stage: ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286)
19:49:04.461 INFO  DAGScheduler - Parents of final stage: List()
19:49:04.461 INFO  DAGScheduler - Missing parents: List()
19:49:04.461 INFO  DAGScheduler - Submitting ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:04.477 INFO  MemoryStore - Block broadcast_609 stored as values in memory (estimated size 426.2 KiB, free 1918.3 MiB)
19:49:04.479 INFO  MemoryStore - Block broadcast_609_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.2 MiB)
19:49:04.479 INFO  BlockManagerInfo - Added broadcast_609_piece0 in memory on localhost:36125 (size: 153.7 KiB, free: 1919.6 MiB)
19:49:04.479 INFO  SparkContext - Created broadcast 609 from broadcast at DAGScheduler.scala:1580
19:49:04.479 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:04.479 INFO  TaskSchedulerImpl - Adding task set 289.0 with 1 tasks resource profile 0
19:49:04.479 INFO  TaskSetManager - Starting task 0.0 in stage 289.0 (TID 376) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7901 bytes) 
19:49:04.480 INFO  Executor - Running task 0.0 in stage 289.0 (TID 376)
19:49:04.509 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/CEUTrio.HiSeq.WGS.b37.NA12878.snippet_with_unmapped.bam:0+2184
19:49:04.513 INFO  Executor - Finished task 0.0 in stage 289.0 (TID 376). 989 bytes result sent to driver
19:49:04.513 INFO  TaskSetManager - Finished task 0.0 in stage 289.0 (TID 376) in 34 ms on localhost (executor driver) (1/1)
19:49:04.513 INFO  TaskSchedulerImpl - Removed TaskSet 289.0, whose tasks have all completed, from pool 
19:49:04.513 INFO  DAGScheduler - ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286) finished in 0.052 s
19:49:04.513 INFO  DAGScheduler - Job 225 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:04.513 INFO  TaskSchedulerImpl - Killing all running tasks in stage 289: Stage finished
19:49:04.513 INFO  DAGScheduler - Job 225 finished: count at ReadsSparkSourceUnitTest.java:286, took 0.052352 s
19:49:04.516 INFO  MemoryStore - Block broadcast_610 stored as values in memory (estimated size 298.0 KiB, free 1917.9 MiB)
19:49:04.522 INFO  MemoryStore - Block broadcast_610_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.8 MiB)
19:49:04.522 INFO  BlockManagerInfo - Added broadcast_610_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.6 MiB)
19:49:04.522 INFO  SparkContext - Created broadcast 610 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.543 INFO  MemoryStore - Block broadcast_611 stored as values in memory (estimated size 298.2 KiB, free 1917.6 MiB)
19:49:04.549 INFO  MemoryStore - Block broadcast_611_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.5 MiB)
19:49:04.549 INFO  BlockManagerInfo - Added broadcast_611_piece0 in memory on localhost:36125 (size: 50.3 KiB, free: 1919.5 MiB)
19:49:04.550 INFO  SparkContext - Created broadcast 611 from newAPIHadoopFile at PathSplitSource.java:96
19:49:04.569 INFO  FileInputFormat - Total input files to process : 1
19:49:04.569 INFO  FileInputFormat - Total input files to process : 1
19:49:04.578 INFO  MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
19:49:04.579 INFO  NameNode - Formatting using clusterid: testClusterID
19:49:04.579 INFO  FSEditLog - Edit logging is async:true
19:49:04.586 INFO  FSNamesystem - KeyProvider: null
19:49:04.586 INFO  FSNamesystem - fsLock is fair: true
19:49:04.586 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
19:49:04.586 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
19:49:04.586 INFO  FSNamesystem - supergroup             = supergroup
19:49:04.586 INFO  FSNamesystem - isPermissionEnabled    = true
19:49:04.586 INFO  FSNamesystem - isStoragePolicyEnabled = true
19:49:04.586 INFO  FSNamesystem - HA Enabled: false
19:49:04.586 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:04.586 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
19:49:04.586 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
19:49:04.586 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19:49:04.586 INFO  BlockManager - The block deletion will start around 2025 Jul 15 19:49:04
19:49:04.586 INFO  GSet - Computing capacity for map BlocksMap
19:49:04.586 INFO  GSet - VM type       = 64-bit
19:49:04.586 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
19:49:04.586 INFO  GSet - capacity      = 2^23 = 8388608 entries
19:49:04.590 INFO  BlockManagerInfo - Removed broadcast_611_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1919.6 MiB)
19:49:04.590 INFO  BlockManagerInfo - Removed broadcast_607_piece0 on localhost:36125 in memory (size: 212.0 B, free: 1919.6 MiB)
19:49:04.591 INFO  BlockManagerInfo - Removed broadcast_606_piece0 on localhost:36125 in memory (size: 153.7 KiB, free: 1919.7 MiB)
19:49:04.591 INFO  BlockManagerInfo - Removed broadcast_605_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.8 MiB)
19:49:04.591 INFO  BlockManager - Storage policy satisfier is disabled
19:49:04.591 INFO  BlockManager - dfs.block.access.token.enable = false
19:49:04.591 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
19:49:04.591 INFO  BlockManagerInfo - Removed broadcast_609_piece0 on localhost:36125 in memory (size: 153.7 KiB, free: 1919.9 MiB)
19:49:04.591 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
19:49:04.591 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
19:49:04.591 INFO  BlockManager - defaultReplication         = 1
19:49:04.591 INFO  BlockManager - maxReplication             = 512
19:49:04.591 INFO  BlockManager - minReplication             = 1
19:49:04.591 INFO  BlockManager - maxReplicationStreams      = 2
19:49:04.591 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
19:49:04.591 INFO  BlockManager - encryptDataTransfer        = false
19:49:04.591 INFO  BlockManager - maxNumBlocksToLog          = 1000
19:49:04.592 INFO  GSet - Computing capacity for map INodeMap
19:49:04.592 INFO  GSet - VM type       = 64-bit
19:49:04.592 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
19:49:04.592 INFO  GSet - capacity      = 2^22 = 4194304 entries
19:49:04.592 INFO  BlockManagerInfo - Removed broadcast_608_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1920.0 MiB)
19:49:04.592 INFO  BlockManagerInfo - Removed broadcast_610_piece0 on localhost:36125 in memory (size: 50.3 KiB, free: 1920.0 MiB)
19:49:04.592 INFO  BlockManagerInfo - Removed broadcast_604_piece0 on localhost:36125 in memory (size: 222.0 B, free: 1920.0 MiB)
19:49:04.593 INFO  FSDirectory - ACLs enabled? true
19:49:04.593 INFO  FSDirectory - POSIX ACL inheritance enabled? true
19:49:04.593 INFO  FSDirectory - XAttrs enabled? true
19:49:04.593 INFO  NameNode - Caching file names occurring more than 10 times
19:49:04.593 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
19:49:04.593 INFO  SnapshotManager - SkipList is disabled
19:49:04.593 INFO  GSet - Computing capacity for map cachedBlocks
19:49:04.593 INFO  GSet - VM type       = 64-bit
19:49:04.593 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
19:49:04.593 INFO  GSet - capacity      = 2^20 = 1048576 entries
19:49:04.596 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
19:49:04.596 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
19:49:04.596 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19:49:04.596 INFO  FSNamesystem - Retry cache on namenode is enabled
19:49:04.596 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19:49:04.596 INFO  GSet - Computing capacity for map NameNodeRetryCache
19:49:04.596 INFO  GSet - VM type       = 64-bit
19:49:04.596 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
19:49:04.596 INFO  GSet - capacity      = 2^17 = 131072 entries
19:49:04.596 INFO  FSImage - Allocated new BlockPoolId: BP-2121563838-10.1.0.127-1752608944596
19:49:04.598 INFO  Storage - Storage directory /tmp/minicluster_storage5103056808776111610/name-0-1 has been successfully formatted.
19:49:04.599 INFO  Storage - Storage directory /tmp/minicluster_storage5103056808776111610/name-0-2 has been successfully formatted.
19:49:04.610 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage5103056808776111610/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
19:49:04.611 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage5103056808776111610/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
19:49:04.613 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage5103056808776111610/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
19:49:04.615 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage5103056808776111610/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
19:49:04.616 INFO  NNStorageRetentionManager - Going to retain 1 images with txid >= 0
19:49:04.619 INFO  FSNamesystem - Stopping services started for active state
19:49:04.620 INFO  FSNamesystem - Stopping services started for standby state
19:49:04.620 INFO  NameNode - createNameNode []
19:49:04.621 WARN  MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
19:49:04.622 INFO  MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
19:49:04.622 INFO  MetricsSystemImpl - NameNode metrics system started
19:49:04.623 INFO  NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
19:49:04.635 INFO  JvmPauseMonitor - Starting JVM pause monitor
19:49:04.635 INFO  DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
19:49:04.635 INFO  DFSUtil - Starting Web-server for hdfs at: http://localhost:0
19:49:04.636 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
19:49:04.636 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
19:49:04.637 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19:49:04.637 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
19:49:04.637 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19:49:04.638 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
19:49:04.638 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
19:49:04.639 INFO  HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
19:49:04.639 INFO  HttpServer2 - Jetty bound to port 42035
19:49:04.639 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
19:49:04.640 INFO  session - DefaultSessionIdManager workerName=node0
19:49:04.640 INFO  session - No SessionScavenger set, using defaults
19:49:04.640 INFO  session - node0 Scavenging every 660000ms
19:49:04.640 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
19:49:04.641 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@852121f{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
19:49:04.729 INFO  ContextHandler - Started o.e.j.w.WebAppContext@b202102{hdfs,/,file:///tmp/jetty-localhost-42035-hadoop-hdfs-3_3_6-tests_jar-_-any-9989145116816958378/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
19:49:04.730 INFO  AbstractConnector - Started ServerConnector@799e0c89{HTTP/1.1, (http/1.1)}{localhost:42035}
19:49:04.730 INFO  Server - Started @75943ms
19:49:04.730 INFO  FSEditLog - Edit logging is async:true
19:49:04.739 INFO  FSNamesystem - KeyProvider: null
19:49:04.739 INFO  FSNamesystem - fsLock is fair: true
19:49:04.739 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
19:49:04.739 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
19:49:04.739 INFO  FSNamesystem - supergroup             = supergroup
19:49:04.739 INFO  FSNamesystem - isPermissionEnabled    = true
19:49:04.739 INFO  FSNamesystem - isStoragePolicyEnabled = true
19:49:04.739 INFO  FSNamesystem - HA Enabled: false
19:49:04.739 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:04.739 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
19:49:04.739 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
19:49:04.739 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
19:49:04.739 INFO  BlockManager - The block deletion will start around 2025 Jul 15 19:49:04
19:49:04.739 INFO  GSet - Computing capacity for map BlocksMap
19:49:04.739 INFO  GSet - VM type       = 64-bit
19:49:04.739 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
19:49:04.739 INFO  GSet - capacity      = 2^23 = 8388608 entries
19:49:04.744 INFO  BlockManager - Storage policy satisfier is disabled
19:49:04.744 INFO  BlockManager - dfs.block.access.token.enable = false
19:49:04.744 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
19:49:04.744 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
19:49:04.744 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
19:49:04.744 INFO  BlockManager - defaultReplication         = 1
19:49:04.744 INFO  BlockManager - maxReplication             = 512
19:49:04.744 INFO  BlockManager - minReplication             = 1
19:49:04.744 INFO  BlockManager - maxReplicationStreams      = 2
19:49:04.744 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
19:49:04.744 INFO  BlockManager - encryptDataTransfer        = false
19:49:04.744 INFO  BlockManager - maxNumBlocksToLog          = 1000
19:49:04.744 INFO  GSet - Computing capacity for map INodeMap
19:49:04.744 INFO  GSet - VM type       = 64-bit
19:49:04.745 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
19:49:04.745 INFO  GSet - capacity      = 2^22 = 4194304 entries
19:49:04.745 INFO  FSDirectory - ACLs enabled? true
19:49:04.745 INFO  FSDirectory - POSIX ACL inheritance enabled? true
19:49:04.745 INFO  FSDirectory - XAttrs enabled? true
19:49:04.746 INFO  NameNode - Caching file names occurring more than 10 times
19:49:04.746 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
19:49:04.746 INFO  SnapshotManager - SkipList is disabled
19:49:04.746 INFO  GSet - Computing capacity for map cachedBlocks
19:49:04.746 INFO  GSet - VM type       = 64-bit
19:49:04.746 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
19:49:04.746 INFO  GSet - capacity      = 2^20 = 1048576 entries
19:49:04.746 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
19:49:04.746 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
19:49:04.746 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
19:49:04.746 INFO  FSNamesystem - Retry cache on namenode is enabled
19:49:04.746 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19:49:04.746 INFO  GSet - Computing capacity for map NameNodeRetryCache
19:49:04.746 INFO  GSet - VM type       = 64-bit
19:49:04.746 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
19:49:04.746 INFO  GSet - capacity      = 2^17 = 131072 entries
19:49:04.747 INFO  Storage - Lock on /tmp/minicluster_storage5103056808776111610/name-0-1/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:04.748 INFO  Storage - Lock on /tmp/minicluster_storage5103056808776111610/name-0-2/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:04.749 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage5103056808776111610/name-0-1/current
19:49:04.749 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage5103056808776111610/name-0-2/current
19:49:04.749 INFO  FSImage - No edit log streams selected.
19:49:04.749 INFO  FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage5103056808776111610/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
19:49:04.750 INFO  FSImageFormatPBINode - Loading 1 INodes.
19:49:04.750 INFO  FSImageFormatPBINode - Successfully loaded 1 inodes
19:49:04.750 INFO  FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
19:49:04.750 INFO  FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
19:49:04.750 INFO  FSImage - Loaded image for txid 0 from /tmp/minicluster_storage5103056808776111610/name-0-1/current/fsimage_0000000000000000000
19:49:04.750 INFO  FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
19:49:04.750 INFO  FSEditLog - Starting log segment at 1
19:49:04.753 INFO  NameCache - initialized with 0 entries 0 lookups
19:49:04.753 INFO  FSNamesystem - Finished loading FSImage in 6 msecs
19:49:04.753 INFO  NameNode - RPC server is binding to localhost:0
19:49:04.753 INFO  NameNode - Enable NameNode state context:false
19:49:04.753 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
19:49:04.753 INFO  Server - Listener at localhost:42563
19:49:04.754 INFO  Server - Starting Socket Reader #1 for port 0
19:49:04.755 INFO  NameNode - Clients are to use localhost:42563 to access this namenode/service.
19:49:04.755 INFO  FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
19:49:04.766 INFO  LeaseManager - Number of blocks under construction: 0
19:49:04.767 INFO  DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
19:49:04.767 INFO  BlockManager - Start MarkedDeleteBlockScrubber thread
19:49:04.767 INFO  BlockManager - initializing replication queues
19:49:04.767 INFO  StateChange - STATE* Leaving safe mode after 0 secs
19:49:04.767 INFO  StateChange - STATE* Network topology has 0 racks and 0 datanodes
19:49:04.767 INFO  StateChange - STATE* UnderReplicatedBlocks has 0 blocks
19:49:04.769 INFO  Server - IPC Server Responder: starting
19:49:04.769 INFO  Server - IPC Server listener on 0: starting
19:49:04.770 INFO  NameNode - NameNode RPC up at: localhost/127.0.0.1:42563
19:49:04.770 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
19:49:04.771 INFO  FSNamesystem - Starting services required for active state
19:49:04.771 INFO  FSDirectory - Initializing quota with 12 thread(s)
19:49:04.771 INFO  FSDirectory - Quota initialization completed in 0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
19:49:04.772 INFO  BlockManager - Total number of blocks            = 0
19:49:04.772 INFO  BlockManager - Number of invalid blocks          = 0
19:49:04.772 INFO  BlockManager - Number of under-replicated blocks = 0
19:49:04.772 INFO  BlockManager - Number of  over-replicated blocks = 0
19:49:04.772 INFO  BlockManager - Number of blocks being written    = 0
19:49:04.772 INFO  StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 5 msec
19:49:04.773 INFO  CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
19:49:04.773 INFO  MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data1,[DISK]file:/tmp/minicluster_storage5103056808776111610/data/data2
19:49:04.773 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data1
19:49:04.773 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data2
19:49:04.783 INFO  MetricsSystemImpl - DataNode metrics system started (again)
19:49:04.783 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:04.783 INFO  BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
19:49:04.783 INFO  DataNode - Configured hostname is 127.0.0.1
19:49:04.783 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
19:49:04.783 INFO  DataNode - Starting DataNode with maxLockedMemory = 0
19:49:04.784 INFO  DataNode - Opened streaming server at /127.0.0.1:38267
19:49:04.784 INFO  DataNode - Balancing bandwidth is 104857600 bytes/s
19:49:04.784 INFO  DataNode - Number threads for balancing is 100
19:49:04.785 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
19:49:04.786 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
19:49:04.786 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
19:49:04.787 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
19:49:04.787 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
19:49:04.787 INFO  HttpServer2 - Jetty bound to port 45125
19:49:04.787 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
19:49:04.788 INFO  session - DefaultSessionIdManager workerName=node0
19:49:04.788 INFO  session - No SessionScavenger set, using defaults
19:49:04.788 INFO  session - node0 Scavenging every 660000ms
19:49:04.788 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@4ff92e0b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
19:49:04.883 INFO  ContextHandler - Started o.e.j.w.WebAppContext@e65a19b{datanode,/,file:///tmp/jetty-localhost-45125-hadoop-hdfs-3_3_6-tests_jar-_-any-12852612993636503253/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
19:49:04.883 INFO  AbstractConnector - Started ServerConnector@72dff228{HTTP/1.1, (http/1.1)}{localhost:45125}
19:49:04.883 INFO  Server - Started @76096ms
19:49:04.884 WARN  DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
19:49:04.885 INFO  DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:34931
19:49:04.885 INFO  DataNode - dnUserName = runner
19:49:04.885 INFO  DataNode - supergroup = supergroup
19:49:04.885 INFO  JvmPauseMonitor - Starting JVM pause monitor
19:49:04.885 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
19:49:04.886 INFO  Server - Listener at localhost:33111
19:49:04.886 INFO  Server - Starting Socket Reader #1 for port 0
19:49:04.887 INFO  DataNode - Opened IPC server at /127.0.0.1:33111
19:49:04.901 INFO  DataNode - Refresh request received for nameservices: null
19:49:04.901 INFO  DataNode - Starting BPOfferServices for nameservices: <default>
19:49:04.901 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
19:49:04.901 INFO  DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:42563 starting to offer service
19:49:04.902 INFO  Server - IPC Server Responder: starting
19:49:04.903 INFO  Server - IPC Server listener on 0: starting
19:49:04.903 INFO  DataNode - Acknowledging ACTIVE Namenode during handshake: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:42563
19:49:04.903 INFO  Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
19:49:04.905 INFO  Storage - Lock on /tmp/minicluster_storage5103056808776111610/data/data1/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:04.905 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data1 is not formatted for namespace 948453116. Formatting...
19:49:04.905 INFO  Storage - Generated new storageID DS-363b8b20-76f4-40db-849d-3edc69317a05 for directory /tmp/minicluster_storage5103056808776111610/data/data1 
19:49:04.906 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
19:49:04.906 INFO  MiniDFSCluster - dnInfo.length != numDataNodes
19:49:04.906 INFO  MiniDFSCluster - Waiting for cluster to become active
19:49:04.907 INFO  Storage - Lock on /tmp/minicluster_storage5103056808776111610/data/data2/in_use.lock acquired by nodename 3162@pkrvmq0rgcvqdmg
19:49:04.907 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data2 is not formatted for namespace 948453116. Formatting...
19:49:04.907 INFO  Storage - Generated new storageID DS-5618019b-0180-465a-b806-96b1f05f917a for directory /tmp/minicluster_storage5103056808776111610/data/data2 
19:49:04.924 INFO  Storage - Analyzing storage directories for bpid BP-2121563838-10.1.0.127-1752608944596
19:49:04.924 INFO  Storage - Locking is disabled for /tmp/minicluster_storage5103056808776111610/data/data1/current/BP-2121563838-10.1.0.127-1752608944596
19:49:04.924 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data1 and block pool id BP-2121563838-10.1.0.127-1752608944596 is not formatted. Formatting ...
19:49:04.924 INFO  Storage - Formatting block pool BP-2121563838-10.1.0.127-1752608944596 directory /tmp/minicluster_storage5103056808776111610/data/data1/current/BP-2121563838-10.1.0.127-1752608944596/current
19:49:04.939 INFO  Storage - Analyzing storage directories for bpid BP-2121563838-10.1.0.127-1752608944596
19:49:04.939 INFO  Storage - Locking is disabled for /tmp/minicluster_storage5103056808776111610/data/data2/current/BP-2121563838-10.1.0.127-1752608944596
19:49:04.939 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data2 and block pool id BP-2121563838-10.1.0.127-1752608944596 is not formatted. Formatting ...
19:49:04.939 INFO  Storage - Formatting block pool BP-2121563838-10.1.0.127-1752608944596 directory /tmp/minicluster_storage5103056808776111610/data/data2/current/BP-2121563838-10.1.0.127-1752608944596/current
19:49:04.940 INFO  DataNode - Setting up storage: nsid=948453116;bpid=BP-2121563838-10.1.0.127-1752608944596;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=948453116;c=1752608944596;bpid=BP-2121563838-10.1.0.127-1752608944596;dnuuid=null
19:49:04.941 INFO  DataNode - Generated and persisted new Datanode UUID c8a4cdde-161e-4f49-9c45-8b6ed26ed99f
19:49:04.941 INFO  FsDatasetImpl - The datanode lock is a read write lock
19:49:04.942 INFO  FsDatasetImpl - Added new volume: DS-363b8b20-76f4-40db-849d-3edc69317a05
19:49:04.942 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data1, StorageType: DISK
19:49:04.942 INFO  FsDatasetImpl - Added new volume: DS-5618019b-0180-465a-b806-96b1f05f917a
19:49:04.942 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage5103056808776111610/data/data2, StorageType: DISK
19:49:04.942 INFO  MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
19:49:04.943 INFO  FsDatasetImpl - Registered FSDatasetState MBean
19:49:04.943 INFO  FsDatasetImpl - Adding block pool BP-2121563838-10.1.0.127-1752608944596
19:49:04.943 INFO  FsDatasetImpl - Scanning block pool BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data1...
19:49:04.943 INFO  FsDatasetImpl - Scanning block pool BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data2...
19:49:04.944 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage5103056808776111610/data/data1/current/BP-2121563838-10.1.0.127-1752608944596/current, will proceed with Du for space computation calculation, 
19:49:04.944 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage5103056808776111610/data/data2/current/BP-2121563838-10.1.0.127-1752608944596/current, will proceed with Du for space computation calculation, 
19:49:04.955 INFO  FsDatasetImpl - Time taken to scan block pool BP-2121563838-10.1.0.127-1752608944596 on /tmp/minicluster_storage5103056808776111610/data/data2: 11ms
19:49:04.955 INFO  FsDatasetImpl - Time taken to scan block pool BP-2121563838-10.1.0.127-1752608944596 on /tmp/minicluster_storage5103056808776111610/data/data1: 12ms
19:49:04.955 INFO  FsDatasetImpl - Total time to scan all replicas for block pool BP-2121563838-10.1.0.127-1752608944596: 12ms
19:49:04.956 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data1...
19:49:04.956 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage5103056808776111610/data/data1/current/BP-2121563838-10.1.0.127-1752608944596/current/replicas doesn't exist 
19:49:04.956 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data2...
19:49:04.956 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data1: 0ms
19:49:04.956 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage5103056808776111610/data/data2/current/BP-2121563838-10.1.0.127-1752608944596/current/replicas doesn't exist 
19:49:04.956 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data2: 0ms
19:49:04.956 INFO  FsDatasetImpl - Total time to add all replicas to map for block pool BP-2121563838-10.1.0.127-1752608944596: 1ms
19:49:04.956 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage5103056808776111610/data/data1
19:49:04.956 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage5103056808776111610/data/data1
19:49:04.956 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage5103056808776111610/data/data2
19:49:04.956 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage5103056808776111610/data/data2
19:49:04.957 INFO  VolumeScanner - Now scanning bpid BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data1
19:49:04.957 WARN  DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
19:49:04.957 INFO  VolumeScanner - Now scanning bpid BP-2121563838-10.1.0.127-1752608944596 on volume /tmp/minicluster_storage5103056808776111610/data/data2
19:49:04.957 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage5103056808776111610/data/data1, DS-363b8b20-76f4-40db-849d-3edc69317a05): finished scanning block pool BP-2121563838-10.1.0.127-1752608944596
19:49:04.957 INFO  DirectoryScanner - Periodic Directory Tree Verification scan starting in 5611209ms with interval of 21600000ms and throttle limit of -1ms/s
19:49:04.957 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage5103056808776111610/data/data2, DS-5618019b-0180-465a-b806-96b1f05f917a): finished scanning block pool BP-2121563838-10.1.0.127-1752608944596
19:49:04.957 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage5103056808776111610/data/data2, DS-5618019b-0180-465a-b806-96b1f05f917a): no suitable block pools found to scan.  Waiting 1814400000 ms.
19:49:04.957 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage5103056808776111610/data/data1, DS-363b8b20-76f4-40db-849d-3edc69317a05): no suitable block pools found to scan.  Waiting 1814400000 ms.
19:49:04.957 INFO  DataNode - Block pool BP-2121563838-10.1.0.127-1752608944596 (Datanode Uuid c8a4cdde-161e-4f49-9c45-8b6ed26ed99f) service to localhost/127.0.0.1:42563 beginning handshake with NN
19:49:04.958 INFO  StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:38267, datanodeUuid=c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, infoPort=34931, infoSecurePort=0, ipcPort=33111, storageInfo=lv=-57;cid=testClusterID;nsid=948453116;c=1752608944596) storage c8a4cdde-161e-4f49-9c45-8b6ed26ed99f
19:49:04.958 INFO  NetworkTopology - Adding a new node: /default-rack/127.0.0.1:38267
19:49:04.958 INFO  BlockReportLeaseManager - Registered DN c8a4cdde-161e-4f49-9c45-8b6ed26ed99f (127.0.0.1:38267).
19:49:04.958 INFO  DataNode - Block pool BP-2121563838-10.1.0.127-1752608944596 (Datanode Uuid c8a4cdde-161e-4f49-9c45-8b6ed26ed99f) service to localhost/127.0.0.1:42563 successfully registered with NN
19:49:04.958 INFO  DataNode - For namenode localhost/127.0.0.1:42563 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
19:49:04.958 INFO  DataNode - Starting IBR Task Handler.
19:49:04.959 INFO  DatanodeDescriptor - Adding new storage ID DS-363b8b20-76f4-40db-849d-3edc69317a05 for DN 127.0.0.1:38267
19:49:04.959 INFO  DatanodeDescriptor - Adding new storage ID DS-5618019b-0180-465a-b806-96b1f05f917a for DN 127.0.0.1:38267
19:49:04.959 INFO  DataNode - After receiving heartbeat response, updating state of namenode localhost:42563 to active
19:49:04.959 INFO  BlockStateChange - BLOCK* processReport 0x345d121de6e5fe29 with lease ID 0x8bfca8ca4e6fe12: Processing first storage report for DS-5618019b-0180-465a-b806-96b1f05f917a from datanode DatanodeRegistration(127.0.0.1:38267, datanodeUuid=c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, infoPort=34931, infoSecurePort=0, ipcPort=33111, storageInfo=lv=-57;cid=testClusterID;nsid=948453116;c=1752608944596)
19:49:04.959 INFO  BlockStateChange - BLOCK* processReport 0x345d121de6e5fe29 with lease ID 0x8bfca8ca4e6fe12: from storage DS-5618019b-0180-465a-b806-96b1f05f917a node DatanodeRegistration(127.0.0.1:38267, datanodeUuid=c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, infoPort=34931, infoSecurePort=0, ipcPort=33111, storageInfo=lv=-57;cid=testClusterID;nsid=948453116;c=1752608944596), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
19:49:04.959 INFO  BlockStateChange - BLOCK* processReport 0x345d121de6e5fe29 with lease ID 0x8bfca8ca4e6fe12: Processing first storage report for DS-363b8b20-76f4-40db-849d-3edc69317a05 from datanode DatanodeRegistration(127.0.0.1:38267, datanodeUuid=c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, infoPort=34931, infoSecurePort=0, ipcPort=33111, storageInfo=lv=-57;cid=testClusterID;nsid=948453116;c=1752608944596)
19:49:04.959 INFO  BlockStateChange - BLOCK* processReport 0x345d121de6e5fe29 with lease ID 0x8bfca8ca4e6fe12: from storage DS-363b8b20-76f4-40db-849d-3edc69317a05 node DatanodeRegistration(127.0.0.1:38267, datanodeUuid=c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, infoPort=34931, infoSecurePort=0, ipcPort=33111, storageInfo=lv=-57;cid=testClusterID;nsid=948453116;c=1752608944596), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
19:49:04.960 INFO  DataNode - Successfully sent block report 0x345d121de6e5fe29 with lease ID 0x8bfca8ca4e6fe12 to namenode: localhost/127.0.0.1:42563,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
19:49:04.960 INFO  DataNode - Got finalize command for block pool BP-2121563838-10.1.0.127-1752608944596
19:49:05.006 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
19:49:05.006 INFO  MiniDFSCluster - Cluster is active
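
The MiniDFSCluster bring-up traced above (format the NameNode, start it, start one DataNode, then wait until "Cluster is active") is normally driven from test code along these lines. The storage directory and configuration key below are assumptions made for the sketch, not values read from the test source.

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public final class MiniClusterSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Keep the NameNode and DataNode storage under a scratch directory (assumed location).
            File storage = new File(System.getProperty("java.io.tmpdir"), "minicluster_storage");
            conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, storage.getAbsolutePath());

            // One NameNode and one DataNode, matching "numNameNodes=1, numDataNodes=1" in the log.
            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
            try {
                cluster.waitClusterUp();                 // corresponds to "Waiting for cluster to become active"
                FileSystem fs = cluster.getFileSystem(); // hdfs:// file system bound to the in-process NameNode
                System.out.println("HDFS is up at " + fs.getUri());
            } finally {
                cluster.shutdown();                      // mirrors "Shutting down the Mini HDFS Cluster" below
            }
        }
    }
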
19:49:05.008 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.009 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.bam	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
19:49:05.010 INFO  StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:38267 for /user/runner/hdfs.bam
19:49:05.011 INFO  DataNode - Receiving BP-2121563838-10.1.0.127-1752608944596:blk_1073741825_1001 src: /127.0.0.1:40186 dest: /127.0.0.1:38267
19:49:05.013 INFO  clienttrace - src: /127.0.0.1:40186, dest: /127.0.0.1:38267, bytes: 2396, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_929058819_1, offset: 0, srvID: c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, blockid: BP-2121563838-10.1.0.127-1752608944596:blk_1073741825_1001, duration(ns): 584688
19:49:05.013 INFO  DataNode - PacketResponder: BP-2121563838-10.1.0.127-1752608944596:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
19:49:05.013 INFO  FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.bam
19:49:05.414 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.bam is closed by DFSClient_NONMAPREDUCE_929058819_1
19:49:05.415 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.415 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.bai	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
19:49:05.416 INFO  StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:38267 for /user/runner/hdfs.bai
19:49:05.417 INFO  DataNode - Receiving BP-2121563838-10.1.0.127-1752608944596:blk_1073741826_1002 src: /127.0.0.1:40188 dest: /127.0.0.1:38267
19:49:05.418 INFO  clienttrace - src: /127.0.0.1:40188, dest: /127.0.0.1:38267, bytes: 656, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_929058819_1, offset: 0, srvID: c8a4cdde-161e-4f49-9c45-8b6ed26ed99f, blockid: BP-2121563838-10.1.0.127-1752608944596:blk_1073741826_1002, duration(ns): 336882
19:49:05.418 INFO  DataNode - PacketResponder: BP-2121563838-10.1.0.127-1752608944596:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
19:49:05.418 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.bai is closed by DFSClient_NONMAPREDUCE_929058819_1
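
The create/completeFile audit entries for /user/runner/hdfs.bam and /user/runner/hdfs.bai correspond to staging a small BAM and its index into the mini cluster. A hedged sketch of that step, assuming fs is the FileSystem obtained from the cluster above; the local source paths are placeholders, not the files the test stages.

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class StageBamInHdfsSketch {
        // Copy a local BAM and its index into the test cluster's HDFS namespace.
        public static void stage(FileSystem fs) throws IOException {
            fs.copyFromLocalFile(new Path("file:///tmp/example.bam"), new Path("/user/runner/hdfs.bam"));
            fs.copyFromLocalFile(new Path("file:///tmp/example.bai"), new Path("/user/runner/hdfs.bai"));
        }
    }
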
19:49:05.420 INFO  MemoryStore - Block broadcast_612 stored as values in memory (estimated size 298.3 KiB, free 1919.7 MiB)
19:49:05.427 INFO  MemoryStore - Block broadcast_612_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
19:49:05.427 INFO  BlockManagerInfo - Added broadcast_612_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1920.0 MiB)
19:49:05.427 INFO  SparkContext - Created broadcast 612 from newAPIHadoopFile at PathSplitSource.java:96
19:49:05.447 INFO  FileInputFormat - Total input files to process : 1
19:49:05.482 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:212
19:49:05.482 INFO  DAGScheduler - Got job 226 (collect at ReadsSparkSourceUnitTest.java:212) with 1 output partitions
19:49:05.482 INFO  DAGScheduler - Final stage: ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212)
19:49:05.482 INFO  DAGScheduler - Parents of final stage: List()
19:49:05.482 INFO  DAGScheduler - Missing parents: List()
19:49:05.482 INFO  DAGScheduler - Submitting ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:05.505 INFO  MemoryStore - Block broadcast_613 stored as values in memory (estimated size 426.4 KiB, free 1919.2 MiB)
19:49:05.506 INFO  MemoryStore - Block broadcast_613_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1919.1 MiB)
19:49:05.506 INFO  BlockManagerInfo - Added broadcast_613_piece0 in memory on localhost:36125 (size: 154.0 KiB, free: 1919.8 MiB)
19:49:05.506 INFO  SparkContext - Created broadcast 613 from broadcast at DAGScheduler.scala:1580
19:49:05.507 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:05.507 INFO  TaskSchedulerImpl - Adding task set 290.0 with 1 tasks resource profile 0
19:49:05.507 INFO  TaskSetManager - Starting task 0.0 in stage 290.0 (TID 377) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7899 bytes) 
19:49:05.507 INFO  Executor - Running task 0.0 in stage 290.0 (TID 377)
19:49:05.536 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/spark/datasources/ReadsSparkSource/hdfs_file_test.bam:0+2396
19:49:05.539 INFO  Executor - Finished task 0.0 in stage 290.0 (TID 377). 5288 bytes result sent to driver
19:49:05.539 INFO  TaskSetManager - Finished task 0.0 in stage 290.0 (TID 377) in 32 ms on localhost (executor driver) (1/1)
19:49:05.539 INFO  TaskSchedulerImpl - Removed TaskSet 290.0, whose tasks have all completed, from pool 
19:49:05.540 INFO  DAGScheduler - ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212) finished in 0.057 s
19:49:05.540 INFO  DAGScheduler - Job 226 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:05.540 INFO  TaskSchedulerImpl - Killing all running tasks in stage 290: Stage finished
19:49:05.540 INFO  DAGScheduler - Job 226 finished: collect at ReadsSparkSourceUnitTest.java:212, took 0.057835 s
19:49:05.540 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.541 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.541 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.542 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.542 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
19:49:05.542 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.543 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.543 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.545 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
19:49:05.545 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
19:49:05.546 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
19:49:05.546 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.sbi	dst=null	perm=null	proto=rpc
19:49:05.547 INFO  MemoryStore - Block broadcast_614 stored as values in memory (estimated size 298.2 KiB, free 1918.8 MiB)
19:49:05.553 INFO  MemoryStore - Block broadcast_614_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.8 MiB)
19:49:05.553 INFO  BlockManagerInfo - Added broadcast_614_piece0 in memory on localhost:36125 (size: 50.2 KiB, free: 1919.8 MiB)
19:49:05.553 INFO  SparkContext - Created broadcast 614 from newAPIHadoopFile at PathSplitSource.java:96
19:49:05.573 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.573 INFO  FileInputFormat - Total input files to process : 1
19:49:05.574 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.609 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:213
19:49:05.609 INFO  DAGScheduler - Got job 227 (collect at ReadsSparkSourceUnitTest.java:213) with 1 output partitions
19:49:05.609 INFO  DAGScheduler - Final stage: ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213)
19:49:05.609 INFO  DAGScheduler - Parents of final stage: List()
19:49:05.609 INFO  DAGScheduler - Missing parents: List()
19:49:05.609 INFO  DAGScheduler - Submitting ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96), which has no missing parents
19:49:05.625 INFO  MemoryStore - Block broadcast_615 stored as values in memory (estimated size 426.4 KiB, free 1918.3 MiB)
19:49:05.627 INFO  MemoryStore - Block broadcast_615_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1918.2 MiB)
19:49:05.627 INFO  BlockManagerInfo - Added broadcast_615_piece0 in memory on localhost:36125 (size: 154.0 KiB, free: 1919.6 MiB)
19:49:05.627 INFO  SparkContext - Created broadcast 615 from broadcast at DAGScheduler.scala:1580
19:49:05.627 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
19:49:05.627 INFO  TaskSchedulerImpl - Adding task set 291.0 with 1 tasks resource profile 0
19:49:05.627 INFO  TaskSetManager - Starting task 0.0 in stage 291.0 (TID 378) (localhost, executor driver, partition 0, ANY, 7796 bytes) 
19:49:05.628 INFO  Executor - Running task 0.0 in stage 291.0 (TID 378)
19:49:05.656 INFO  NewHadoopRDD - Input split: hdfs://localhost:42563/user/runner/hdfs.bam:0+2396
19:49:05.657 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.657 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.658 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
19:49:05.658 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.659 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.659 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
19:49:05.660 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.660 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.660 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.661 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
19:49:05.662 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
19:49:05.663 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
19:49:05.663 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.663 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.664 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
19:49:05.664 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.665 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
19:49:05.665 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
19:49:05.665 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.666 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.666 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
19:49:05.667 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
19:49:05.667 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
19:49:05.674 INFO  BlockManagerInfo - Removed broadcast_612_piece0 on localhost:36125 in memory (size: 50.2 KiB, free: 1919.7 MiB)
19:49:05.674 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
19:49:05.674 INFO  BlockManagerInfo - Removed broadcast_613_piece0 on localhost:36125 in memory (size: 154.0 KiB, free: 1919.8 MiB)
19:49:05.674 INFO  Executor - Finished task 0.0 in stage 291.0 (TID 378). 5331 bytes result sent to driver
19:49:05.675 INFO  TaskSetManager - Finished task 0.0 in stage 291.0 (TID 378) in 48 ms on localhost (executor driver) (1/1)
19:49:05.675 INFO  TaskSchedulerImpl - Removed TaskSet 291.0, whose tasks have all completed, from pool 
19:49:05.675 INFO  DAGScheduler - ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213) finished in 0.066 s
19:49:05.675 INFO  DAGScheduler - Job 227 is finished. Cancelling potential speculative or zombie tasks for this job
19:49:05.675 INFO  TaskSchedulerImpl - Killing all running tasks in stage 291: Stage finished
19:49:05.675 INFO  DAGScheduler - Job 227 finished: collect at ReadsSparkSourceUnitTest.java:213, took 0.066320 s
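
Jobs 226 and 227 collect the same data twice, once from the local BAM and once from hdfs://localhost:42563/user/runner/hdfs.bam, which suggests the two results are then compared. A rough sketch of that compare-two-sources pattern follows; it uses textFile on placeholder paths purely to show the shape of the check, not the BAM reader the test actually exercises.

    import java.util.List;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public final class CompareLocalAndHdfsSketch {
        public static void main(String[] args) {
            // args[0] = file:// path, args[1] = hdfs:// path of the same staged file (placeholders).
            try (JavaSparkContext sc = new JavaSparkContext(
                    new SparkConf().setMaster("local[1]").setAppName("compare-sketch"))) {
                List<String> fromLocal = sc.textFile(args[0]).collect(); // like the collect at line 212
                List<String> fromHdfs  = sc.textFile(args[1]).collect(); // like the collect at line 213
                if (!fromLocal.equals(fromHdfs)) {
                    throw new AssertionError("local and HDFS copies differ");
                }
            }
        }
    }
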
19:49:05.675 INFO  MiniDFSCluster - Shutting down the Mini HDFS Cluster
19:49:05.675 INFO  MiniDFSCluster - Shutting down DataNode 0
19:49:05.676 INFO  DirectoryScanner - Shutdown has been called
19:49:05.676 INFO  DataNode - Closing all peers.
19:49:05.676 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage5103056808776111610/data/data1, DS-363b8b20-76f4-40db-849d-3edc69317a05) exiting.
19:49:05.676 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage5103056808776111610/data/data2, DS-5618019b-0180-465a-b806-96b1f05f917a) exiting.
19:49:05.679 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@e65a19b{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
19:49:05.679 INFO  AbstractConnector - Stopped ServerConnector@72dff228{HTTP/1.1, (http/1.1)}{localhost:0}
19:49:05.679 INFO  session - node0 Stopped scavenging
19:49:05.679 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@4ff92e0b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
19:49:05.680 INFO  DataNode - Waiting up to 30 seconds for transfer threads to complete
19:49:05.680 INFO  Server - Stopping server on 33111
19:49:05.680 INFO  Server - Stopping IPC Server listener on 0
19:49:05.680 INFO  Server - Stopping IPC Server Responder
19:49:05.681 ERROR DataNode - Command processor encountered interrupt and exit.
19:49:05.681 WARN  DataNode - Ending command processor service for: Thread[Command processor,5,main]
19:49:05.681 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
19:49:05.681 WARN  DataNode - Ending block pool service for: Block pool BP-2121563838-10.1.0.127-1752608944596 (Datanode Uuid c8a4cdde-161e-4f49-9c45-8b6ed26ed99f) service to localhost/127.0.0.1:42563
19:49:05.681 INFO  DataNode - Removed Block pool BP-2121563838-10.1.0.127-1752608944596 (Datanode Uuid c8a4cdde-161e-4f49-9c45-8b6ed26ed99f)
19:49:05.681 INFO  FsDatasetImpl - Removing block pool BP-2121563838-10.1.0.127-1752608944596
19:49:05.681 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
19:49:05.681 INFO  FsDatasetAsyncDiskService - Shutting down all async disk service threads
19:49:05.681 INFO  FsDatasetAsyncDiskService - All async disk service threads have been shut down
19:49:05.681 INFO  RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
19:49:05.681 INFO  RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
19:49:05.681 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
19:49:05.681 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
19:49:05.682 INFO  DataNode - Shutdown complete.
19:49:05.682 INFO  MiniDFSCluster - Shutting down the namenode
19:49:05.682 INFO  FSNamesystem - Stopping services started for active state
19:49:05.682 INFO  FSEditLog - Ending log segment 1, 13
19:49:05.682 INFO  FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
19:49:05.682 INFO  FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
19:49:05.682 INFO  FSEditLog - Number of transactions: 14 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 3 Number of syncs: 12 SyncTimes(ms): 1 2 
19:49:05.682 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage5103056808776111610/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage5103056808776111610/name-0-1/current/edits_0000000000000000001-0000000000000000014
19:49:05.683 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage5103056808776111610/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage5103056808776111610/name-0-2/current/edits_0000000000000000001-0000000000000000014
19:49:05.683 INFO  FSEditLog - FSEditLogAsync was interrupted, exiting
19:49:05.683 INFO  CacheReplicationMonitor - Shutting down CacheReplicationMonitor
19:49:05.683 INFO  Server - Stopping server on 42563
19:49:05.683 INFO  Server - Stopping IPC Server listener on 0
19:49:05.684 INFO  Server - Stopping IPC Server Responder
19:49:05.684 INFO  BlockManager - Stopping MarkedDeleteBlockScrubber.
19:49:05.684 INFO  BlockManager - Stopping RedundancyMonitor.
19:49:05.687 INFO  FSNamesystem - Stopping services started for active state
19:49:05.688 INFO  FSNamesystem - Stopping services started for standby state
19:49:05.689 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@b202102{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
19:49:05.689 INFO  AbstractConnector - Stopped ServerConnector@799e0c89{HTTP/1.1, (http/1.1)}{localhost:0}
19:49:05.689 INFO  session - node0 Stopped scavenging
19:49:05.689 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@852121f{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
19:49:05.690 INFO  MetricsSystemImpl - Stopping DataNode metrics system...
19:49:05.690 INFO  MetricsSystemImpl - DataNode metrics system stopped.
19:49:05.690 INFO  MetricsSystemImpl - DataNode metrics system shutdown complete.