Class org.broadinstitute.hellbender.engine.spark.datasources.ReadsSparkSourceUnitTest

28 tests, 0 failures, 0 ignored, 5.556s duration, 100% successful

Tests

Test                                                        Duration    Result
loadReadsNonExistentReference 0.001s passed
readsSparkSourceTestLenient[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.172s passed
readsSparkSourceTestLenient[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.144s passed
readsSparkSourceTestSilent[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.167s passed
readsSparkSourceTestSilent[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.146s passed
readsSparkSourceTestStrict[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.040s passed
readsSparkSourceTestStrict[1](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.030s passed
readsSparkSourceTest[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.201s passed
readsSparkSourceTest[1](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam, null) 0.198s passed
readsSparkSourceTest[2](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, null) 0.175s passed
readsSparkSourceTest[3](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.143s passed
readsSparkSourceTest[4](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.fasta) 0.135s passed
readsSparkSourceUnknownHostTest 0.005s passed
shardedReadsSparkSourceTest[0](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam, null) 0.210s passed
testCRAMReferenceFromHDFS 1.648s passed
testCRAMReferenceRequired 0.001s passed
testCheckCRAMReference 0s passed
testGetHeader[0](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam, null) 0.040s passed
testGetHeader[1](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam, null) 0.029s passed
testGetHeader[2](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam, null) 0.029s passed
testGetHeader[3](/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram, /home/runner/work/gatk/gatk/src/test/resources/human_g1k_v37.chr17_1Mb.fasta) 0.023s passed
testGetHeader[4](/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram, /home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.fasta) 0.036s passed
testHeadersAreStripped 0.134s passed
testIntervals 0.097s passed
testIntervalsWithUnmapped 0.098s passed
testPartitionSizing 0.068s passed
testReadFromFileAndHDFS 1.586s passed
testReject2BitCRAMReference 0s passed

Standard error

14:51:57.173 INFO  MemoryStore - Block broadcast_536 stored as values in memory (estimated size 297.9 KiB, free 1918.1 MiB)
14:51:57.184 INFO  MemoryStore - Block broadcast_536_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.0 MiB)
14:51:57.184 INFO  BlockManagerInfo - Added broadcast_536_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.6 MiB)
14:51:57.185 INFO  SparkContext - Created broadcast 536 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.209 INFO  MemoryStore - Block broadcast_537 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
14:51:57.220 INFO  MemoryStore - Block broadcast_537_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
14:51:57.221 INFO  BlockManagerInfo - Added broadcast_537_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.5 MiB)
14:51:57.221 INFO  SparkContext - Created broadcast 537 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.259 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:57.260 INFO  DAGScheduler - Got job 201 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:57.260 INFO  DAGScheduler - Final stage: ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:57.260 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.260 INFO  DAGScheduler - Missing parents: List()
14:51:57.260 INFO  DAGScheduler - Submitting ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:57.260 INFO  MemoryStore - Block broadcast_538 stored as values in memory (estimated size 3.0 KiB, free 1917.7 MiB)
14:51:57.261 INFO  MemoryStore - Block broadcast_538_piece0 stored as bytes in memory (estimated size 1777.0 B, free 1917.7 MiB)
14:51:57.261 INFO  BlockManagerInfo - Added broadcast_538_piece0 in memory on localhost:44923 (size: 1777.0 B, free: 1919.5 MiB)
14:51:57.261 INFO  SparkContext - Created broadcast 538 from broadcast at DAGScheduler.scala:1580
14:51:57.261 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:57.261 INFO  TaskSchedulerImpl - Adding task set 265.0 with 4 tasks resource profile 0
14:51:57.262 INFO  TaskSetManager - Starting task 0.0 in stage 265.0 (TID 321) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
14:51:57.263 INFO  TaskSetManager - Starting task 1.0 in stage 265.0 (TID 322) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
14:51:57.263 INFO  TaskSetManager - Starting task 2.0 in stage 265.0 (TID 323) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
14:51:57.263 INFO  TaskSetManager - Starting task 3.0 in stage 265.0 (TID 324) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
14:51:57.263 INFO  Executor - Running task 0.0 in stage 265.0 (TID 321)
14:51:57.265 INFO  Executor - Running task 1.0 in stage 265.0 (TID 322)
14:51:57.265 INFO  Executor - Running task 3.0 in stage 265.0 (TID 324)
14:51:57.266 INFO  Executor - Finished task 1.0 in stage 265.0 (TID 322). 40426 bytes result sent to driver
14:51:57.266 INFO  Executor - Finished task 0.0 in stage 265.0 (TID 321). 40355 bytes result sent to driver
14:51:57.266 INFO  Executor - Running task 2.0 in stage 265.0 (TID 323)
14:51:57.266 INFO  Executor - Finished task 3.0 in stage 265.0 (TID 324). 40762 bytes result sent to driver
14:51:57.267 INFO  TaskSetManager - Finished task 1.0 in stage 265.0 (TID 322) in 5 ms on localhost (executor driver) (1/4)
14:51:57.267 INFO  Executor - Finished task 2.0 in stage 265.0 (TID 323). 40337 bytes result sent to driver
14:51:57.268 INFO  TaskSetManager - Finished task 3.0 in stage 265.0 (TID 324) in 4 ms on localhost (executor driver) (2/4)
14:51:57.268 INFO  TaskSetManager - Finished task 0.0 in stage 265.0 (TID 321) in 6 ms on localhost (executor driver) (3/4)
14:51:57.268 INFO  TaskSetManager - Finished task 2.0 in stage 265.0 (TID 323) in 5 ms on localhost (executor driver) (4/4)
14:51:57.268 INFO  TaskSchedulerImpl - Removed TaskSet 265.0, whose tasks have all completed, from pool 
14:51:57.269 INFO  DAGScheduler - ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.008 s
14:51:57.269 INFO  DAGScheduler - Job 201 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.269 INFO  TaskSchedulerImpl - Killing all running tasks in stage 265: Stage finished
14:51:57.269 INFO  DAGScheduler - Job 201 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.009363 s
14:51:57.269 INFO  FileInputFormat - Total input files to process : 1
14:51:57.311 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:57.312 INFO  DAGScheduler - Got job 202 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:57.312 INFO  DAGScheduler - Final stage: ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:57.312 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.312 INFO  DAGScheduler - Missing parents: List()
14:51:57.312 INFO  DAGScheduler - Submitting ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:57.331 INFO  MemoryStore - Block broadcast_539 stored as values in memory (estimated size 426.2 KiB, free 1917.3 MiB)
14:51:57.333 INFO  MemoryStore - Block broadcast_539_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.1 MiB)
14:51:57.333 INFO  BlockManagerInfo - Added broadcast_539_piece0 in memory on localhost:44923 (size: 153.6 KiB, free: 1919.4 MiB)
14:51:57.333 INFO  SparkContext - Created broadcast 539 from broadcast at DAGScheduler.scala:1580
14:51:57.333 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:57.334 INFO  TaskSchedulerImpl - Adding task set 266.0 with 1 tasks resource profile 0
14:51:57.334 INFO  TaskSetManager - Starting task 0.0 in stage 266.0 (TID 325) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
14:51:57.335 INFO  Executor - Running task 0.0 in stage 266.0 (TID 325)
14:51:57.366 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
14:51:57.370 INFO  Executor - Finished task 0.0 in stage 266.0 (TID 325). 159572 bytes result sent to driver
14:51:57.370 INFO  TaskSetManager - Finished task 0.0 in stage 266.0 (TID 325) in 36 ms on localhost (executor driver) (1/1)
14:51:57.370 INFO  TaskSchedulerImpl - Removed TaskSet 266.0, whose tasks have all completed, from pool 
14:51:57.371 INFO  DAGScheduler - ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.059 s
14:51:57.371 INFO  DAGScheduler - Job 202 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.371 INFO  TaskSchedulerImpl - Killing all running tasks in stage 266: Stage finished
14:51:57.371 INFO  DAGScheduler - Job 202 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.059416 s
14:51:57.374 INFO  MemoryStore - Block broadcast_540 stored as values in memory (estimated size 298.0 KiB, free 1916.8 MiB)
14:51:57.380 INFO  MemoryStore - Block broadcast_540_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1916.8 MiB)
14:51:57.380 INFO  BlockManagerInfo - Added broadcast_540_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.3 MiB)
14:51:57.381 INFO  SparkContext - Created broadcast 540 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.407 INFO  MemoryStore - Block broadcast_541 stored as values in memory (estimated size 298.0 KiB, free 1916.5 MiB)
14:51:57.418 INFO  MemoryStore - Block broadcast_541_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1916.4 MiB)
14:51:57.418 INFO  BlockManagerInfo - Added broadcast_541_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.3 MiB)
14:51:57.419 INFO  SparkContext - Created broadcast 541 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.443 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:57.443 INFO  DAGScheduler - Got job 203 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:57.443 INFO  DAGScheduler - Final stage: ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:57.443 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.443 INFO  DAGScheduler - Missing parents: List()
14:51:57.443 INFO  DAGScheduler - Submitting ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:57.444 INFO  MemoryStore - Block broadcast_542 stored as values in memory (estimated size 3.0 KiB, free 1916.4 MiB)
14:51:57.444 INFO  MemoryStore - Block broadcast_542_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1916.4 MiB)
14:51:57.444 INFO  BlockManagerInfo - Added broadcast_542_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.3 MiB)
14:51:57.444 INFO  SparkContext - Created broadcast 542 from broadcast at DAGScheduler.scala:1580
14:51:57.445 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:57.445 INFO  TaskSchedulerImpl - Adding task set 267.0 with 4 tasks resource profile 0
14:51:57.445 INFO  TaskSetManager - Starting task 0.0 in stage 267.0 (TID 326) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
14:51:57.446 INFO  TaskSetManager - Starting task 1.0 in stage 267.0 (TID 327) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
14:51:57.446 INFO  TaskSetManager - Starting task 2.0 in stage 267.0 (TID 328) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
14:51:57.447 INFO  TaskSetManager - Starting task 3.0 in stage 267.0 (TID 329) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
14:51:57.447 INFO  Executor - Running task 3.0 in stage 267.0 (TID 329)
14:51:57.447 INFO  Executor - Running task 0.0 in stage 267.0 (TID 326)
14:51:57.447 INFO  Executor - Running task 2.0 in stage 267.0 (TID 328)
14:51:57.447 INFO  Executor - Running task 1.0 in stage 267.0 (TID 327)
14:51:57.450 INFO  Executor - Finished task 1.0 in stage 267.0 (TID 327). 163368 bytes result sent to driver
14:51:57.450 INFO  Executor - Finished task 0.0 in stage 267.0 (TID 326). 163161 bytes result sent to driver
14:51:57.451 INFO  TaskSetManager - Finished task 0.0 in stage 267.0 (TID 326) in 6 ms on localhost (executor driver) (1/4)
14:51:57.451 INFO  Executor - Finished task 3.0 in stage 267.0 (TID 329). 162694 bytes result sent to driver
14:51:57.452 INFO  TaskSetManager - Finished task 1.0 in stage 267.0 (TID 327) in 7 ms on localhost (executor driver) (2/4)
14:51:57.452 INFO  TaskSetManager - Finished task 3.0 in stage 267.0 (TID 329) in 6 ms on localhost (executor driver) (3/4)
14:51:57.459 INFO  Executor - Finished task 2.0 in stage 267.0 (TID 328). 163442 bytes result sent to driver
14:51:57.459 INFO  BlockManagerInfo - Removed broadcast_532_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.3 MiB)
14:51:57.460 INFO  BlockManagerInfo - Removed broadcast_536_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.4 MiB)
14:51:57.460 INFO  TaskSetManager - Finished task 2.0 in stage 267.0 (TID 328) in 14 ms on localhost (executor driver) (4/4)
14:51:57.460 INFO  BlockManagerInfo - Removed broadcast_525_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.4 MiB)
14:51:57.460 INFO  TaskSchedulerImpl - Removed TaskSet 267.0, whose tasks have all completed, from pool 
14:51:57.460 INFO  DAGScheduler - ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.017 s
14:51:57.460 INFO  DAGScheduler - Job 203 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.460 INFO  TaskSchedulerImpl - Killing all running tasks in stage 267: Stage finished
14:51:57.460 INFO  DAGScheduler - Job 203 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.017663 s
14:51:57.461 INFO  BlockManagerInfo - Removed broadcast_537_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.5 MiB)
14:51:57.461 INFO  FileInputFormat - Total input files to process : 1
14:51:57.462 INFO  BlockManagerInfo - Removed broadcast_533_piece0 on localhost:44923 in memory (size: 54.5 KiB, free: 1919.5 MiB)
14:51:57.462 INFO  BlockManagerInfo - Removed broadcast_538_piece0 on localhost:44923 in memory (size: 1777.0 B, free: 1919.5 MiB)
14:51:57.462 INFO  BlockManagerInfo - Removed broadcast_531_piece0 on localhost:44923 in memory (size: 233.0 B, free: 1919.5 MiB)
14:51:57.463 INFO  BlockManagerInfo - Removed broadcast_539_piece0 on localhost:44923 in memory (size: 153.6 KiB, free: 1919.7 MiB)
14:51:57.463 INFO  BlockManagerInfo - Removed broadcast_540_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.7 MiB)
14:51:57.464 INFO  BlockManagerInfo - Removed broadcast_534_piece0 on localhost:44923 in memory (size: 153.6 KiB, free: 1919.9 MiB)
14:51:57.464 INFO  BlockManagerInfo - Removed broadcast_535_piece0 on localhost:44923 in memory (size: 54.5 KiB, free: 1919.9 MiB)
14:51:57.499 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:57.499 INFO  DAGScheduler - Got job 204 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:57.499 INFO  DAGScheduler - Final stage: ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:57.499 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.499 INFO  DAGScheduler - Missing parents: List()
14:51:57.499 INFO  DAGScheduler - Submitting ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:57.516 INFO  MemoryStore - Block broadcast_543 stored as values in memory (estimated size 426.2 KiB, free 1919.2 MiB)
14:51:57.518 INFO  MemoryStore - Block broadcast_543_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1919.1 MiB)
14:51:57.518 INFO  BlockManagerInfo - Added broadcast_543_piece0 in memory on localhost:44923 (size: 153.6 KiB, free: 1919.8 MiB)
14:51:57.518 INFO  SparkContext - Created broadcast 543 from broadcast at DAGScheduler.scala:1580
14:51:57.518 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:57.518 INFO  TaskSchedulerImpl - Adding task set 268.0 with 1 tasks resource profile 0
14:51:57.519 INFO  TaskSetManager - Starting task 0.0 in stage 268.0 (TID 330) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes) 
14:51:57.519 INFO  Executor - Running task 0.0 in stage 268.0 (TID 330)
14:51:57.552 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
14:51:57.567 INFO  Executor - Finished task 0.0 in stage 268.0 (TID 330). 650184 bytes result sent to driver
14:51:57.569 INFO  TaskSetManager - Finished task 0.0 in stage 268.0 (TID 330) in 50 ms on localhost (executor driver) (1/1)
14:51:57.569 INFO  TaskSchedulerImpl - Removed TaskSet 268.0, whose tasks have all completed, from pool 
14:51:57.569 INFO  DAGScheduler - ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.069 s
14:51:57.569 INFO  DAGScheduler - Job 204 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.569 INFO  TaskSchedulerImpl - Killing all running tasks in stage 268: Stage finished
14:51:57.569 INFO  DAGScheduler - Job 204 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.070558 s
14:51:57.572 INFO  MemoryStore - Block broadcast_544 stored as values in memory (estimated size 298.0 KiB, free 1918.8 MiB)
14:51:57.579 INFO  MemoryStore - Block broadcast_544_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.7 MiB)
14:51:57.579 INFO  BlockManagerInfo - Added broadcast_544_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.8 MiB)
14:51:57.579 INFO  SparkContext - Created broadcast 544 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.604 INFO  MemoryStore - Block broadcast_545 stored as values in memory (estimated size 298.0 KiB, free 1918.5 MiB)
14:51:57.611 INFO  MemoryStore - Block broadcast_545_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.4 MiB)
14:51:57.611 INFO  BlockManagerInfo - Added broadcast_545_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.7 MiB)
14:51:57.611 INFO  SparkContext - Created broadcast 545 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.635 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:57.635 INFO  DAGScheduler - Got job 205 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:57.635 INFO  DAGScheduler - Final stage: ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:57.635 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.636 INFO  DAGScheduler - Missing parents: List()
14:51:57.636 INFO  DAGScheduler - Submitting ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:57.636 INFO  MemoryStore - Block broadcast_546 stored as values in memory (estimated size 3.0 KiB, free 1918.4 MiB)
14:51:57.637 INFO  MemoryStore - Block broadcast_546_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.4 MiB)
14:51:57.637 INFO  BlockManagerInfo - Added broadcast_546_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.7 MiB)
14:51:57.637 INFO  SparkContext - Created broadcast 546 from broadcast at DAGScheduler.scala:1580
14:51:57.637 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:57.637 INFO  TaskSchedulerImpl - Adding task set 269.0 with 4 tasks resource profile 0
14:51:57.638 INFO  TaskSetManager - Starting task 0.0 in stage 269.0 (TID 331) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
14:51:57.638 INFO  TaskSetManager - Starting task 1.0 in stage 269.0 (TID 332) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
14:51:57.638 INFO  TaskSetManager - Starting task 2.0 in stage 269.0 (TID 333) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
14:51:57.639 INFO  TaskSetManager - Starting task 3.0 in stage 269.0 (TID 334) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
14:51:57.639 INFO  Executor - Running task 0.0 in stage 269.0 (TID 331)
14:51:57.639 INFO  Executor - Running task 1.0 in stage 269.0 (TID 332)
14:51:57.639 INFO  Executor - Running task 2.0 in stage 269.0 (TID 333)
14:51:57.639 INFO  Executor - Running task 3.0 in stage 269.0 (TID 334)
14:51:57.641 INFO  Executor - Finished task 0.0 in stage 269.0 (TID 331). 163161 bytes result sent to driver
14:51:57.641 INFO  Executor - Finished task 1.0 in stage 269.0 (TID 332). 163368 bytes result sent to driver
14:51:57.641 INFO  Executor - Finished task 3.0 in stage 269.0 (TID 334). 162694 bytes result sent to driver
14:51:57.641 INFO  Executor - Finished task 2.0 in stage 269.0 (TID 333). 163356 bytes result sent to driver
14:51:57.642 INFO  TaskSetManager - Finished task 0.0 in stage 269.0 (TID 331) in 5 ms on localhost (executor driver) (1/4)
14:51:57.642 INFO  TaskSetManager - Finished task 3.0 in stage 269.0 (TID 334) in 4 ms on localhost (executor driver) (2/4)
14:51:57.642 INFO  TaskSetManager - Finished task 1.0 in stage 269.0 (TID 332) in 4 ms on localhost (executor driver) (3/4)
14:51:57.642 INFO  TaskSetManager - Finished task 2.0 in stage 269.0 (TID 333) in 4 ms on localhost (executor driver) (4/4)
14:51:57.642 INFO  TaskSchedulerImpl - Removed TaskSet 269.0, whose tasks have all completed, from pool 
14:51:57.642 INFO  DAGScheduler - ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
14:51:57.642 INFO  DAGScheduler - Job 205 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.642 INFO  TaskSchedulerImpl - Killing all running tasks in stage 269: Stage finished
14:51:57.642 INFO  DAGScheduler - Job 205 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007174 s
14:51:57.643 INFO  FileInputFormat - Total input files to process : 1
14:51:57.679 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:57.680 INFO  DAGScheduler - Got job 206 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:57.680 INFO  DAGScheduler - Final stage: ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:57.680 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.680 INFO  DAGScheduler - Missing parents: List()
14:51:57.680 INFO  DAGScheduler - Submitting ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:57.697 INFO  MemoryStore - Block broadcast_547 stored as values in memory (estimated size 426.2 KiB, free 1918.0 MiB)
14:51:57.698 INFO  MemoryStore - Block broadcast_547_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.8 MiB)
14:51:57.699 INFO  BlockManagerInfo - Added broadcast_547_piece0 in memory on localhost:44923 (size: 153.7 KiB, free: 1919.5 MiB)
14:51:57.699 INFO  SparkContext - Created broadcast 547 from broadcast at DAGScheduler.scala:1580
14:51:57.699 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:57.699 INFO  TaskSchedulerImpl - Adding task set 270.0 with 1 tasks resource profile 0
14:51:57.699 INFO  TaskSetManager - Starting task 0.0 in stage 270.0 (TID 335) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7912 bytes) 
14:51:57.700 INFO  Executor - Running task 0.0 in stage 270.0 (TID 335)
14:51:57.731 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam:0+216896
14:51:57.743 INFO  Executor - Finished task 0.0 in stage 270.0 (TID 335). 650141 bytes result sent to driver
14:51:57.745 INFO  TaskSetManager - Finished task 0.0 in stage 270.0 (TID 335) in 46 ms on localhost (executor driver) (1/1)
14:51:57.745 INFO  TaskSchedulerImpl - Removed TaskSet 270.0, whose tasks have all completed, from pool 
14:51:57.745 INFO  DAGScheduler - ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.065 s
14:51:57.745 INFO  DAGScheduler - Job 206 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.745 INFO  TaskSchedulerImpl - Killing all running tasks in stage 270: Stage finished
14:51:57.745 INFO  DAGScheduler - Job 206 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.065642 s
14:51:57.748 INFO  MemoryStore - Block broadcast_548 stored as values in memory (estimated size 536.0 B, free 1917.8 MiB)
14:51:57.748 INFO  MemoryStore - Block broadcast_548_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.8 MiB)
14:51:57.748 INFO  BlockManagerInfo - Added broadcast_548_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1919.5 MiB)
14:51:57.748 INFO  SparkContext - Created broadcast 548 from broadcast at CramSource.java:114
14:51:57.749 INFO  MemoryStore - Block broadcast_549 stored as values in memory (estimated size 297.9 KiB, free 1917.5 MiB)
14:51:57.755 INFO  MemoryStore - Block broadcast_549_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.5 MiB)
14:51:57.756 INFO  BlockManagerInfo - Added broadcast_549_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.5 MiB)
14:51:57.756 INFO  SparkContext - Created broadcast 549 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.776 INFO  MemoryStore - Block broadcast_550 stored as values in memory (estimated size 536.0 B, free 1917.5 MiB)
14:51:57.777 INFO  MemoryStore - Block broadcast_550_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.5 MiB)
14:51:57.777 INFO  BlockManagerInfo - Added broadcast_550_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1919.5 MiB)
14:51:57.777 INFO  SparkContext - Created broadcast 550 from broadcast at CramSource.java:114
14:51:57.778 INFO  MemoryStore - Block broadcast_551 stored as values in memory (estimated size 297.9 KiB, free 1917.2 MiB)
14:51:57.784 INFO  MemoryStore - Block broadcast_551_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.2 MiB)
14:51:57.784 INFO  BlockManagerInfo - Added broadcast_551_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.5 MiB)
14:51:57.784 INFO  SparkContext - Created broadcast 551 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.801 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:57.802 INFO  DAGScheduler - Got job 207 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:57.802 INFO  DAGScheduler - Final stage: ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:57.802 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.802 INFO  DAGScheduler - Missing parents: List()
14:51:57.802 INFO  DAGScheduler - Submitting ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:57.802 INFO  MemoryStore - Block broadcast_552 stored as values in memory (estimated size 3.0 KiB, free 1917.2 MiB)
14:51:57.803 INFO  MemoryStore - Block broadcast_552_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.2 MiB)
14:51:57.803 INFO  BlockManagerInfo - Added broadcast_552_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.4 MiB)
14:51:57.803 INFO  SparkContext - Created broadcast 552 from broadcast at DAGScheduler.scala:1580
14:51:57.803 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:57.803 INFO  TaskSchedulerImpl - Adding task set 271.0 with 4 tasks resource profile 0
14:51:57.804 INFO  TaskSetManager - Starting task 0.0 in stage 271.0 (TID 336) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
14:51:57.804 INFO  TaskSetManager - Starting task 1.0 in stage 271.0 (TID 337) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
14:51:57.805 INFO  TaskSetManager - Starting task 2.0 in stage 271.0 (TID 338) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
14:51:57.805 INFO  TaskSetManager - Starting task 3.0 in stage 271.0 (TID 339) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
14:51:57.805 INFO  Executor - Running task 1.0 in stage 271.0 (TID 337)
14:51:57.805 INFO  Executor - Running task 2.0 in stage 271.0 (TID 338)
14:51:57.805 INFO  Executor - Running task 3.0 in stage 271.0 (TID 339)
14:51:57.805 INFO  Executor - Running task 0.0 in stage 271.0 (TID 336)
14:51:57.807 INFO  Executor - Finished task 2.0 in stage 271.0 (TID 338). 39027 bytes result sent to driver
14:51:57.807 INFO  Executor - Finished task 0.0 in stage 271.0 (TID 336). 38907 bytes result sent to driver
14:51:57.807 INFO  Executor - Finished task 1.0 in stage 271.0 (TID 337). 39130 bytes result sent to driver
14:51:57.807 INFO  TaskSetManager - Finished task 2.0 in stage 271.0 (TID 338) in 3 ms on localhost (executor driver) (1/4)
14:51:57.808 INFO  Executor - Finished task 3.0 in stage 271.0 (TID 339). 39300 bytes result sent to driver
14:51:57.808 INFO  TaskSetManager - Finished task 1.0 in stage 271.0 (TID 337) in 4 ms on localhost (executor driver) (2/4)
14:51:57.808 INFO  TaskSetManager - Finished task 0.0 in stage 271.0 (TID 336) in 4 ms on localhost (executor driver) (3/4)
14:51:57.808 INFO  TaskSetManager - Finished task 3.0 in stage 271.0 (TID 339) in 3 ms on localhost (executor driver) (4/4)
14:51:57.808 INFO  TaskSchedulerImpl - Removed TaskSet 271.0, whose tasks have all completed, from pool 
14:51:57.808 INFO  DAGScheduler - ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
14:51:57.808 INFO  DAGScheduler - Job 207 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.808 INFO  TaskSchedulerImpl - Killing all running tasks in stage 271: Stage finished
14:51:57.808 INFO  DAGScheduler - Job 207 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007154 s
14:51:57.809 INFO  FileInputFormat - Total input files to process : 1
14:51:57.835 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:57.836 INFO  DAGScheduler - Got job 208 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:57.836 INFO  DAGScheduler - Final stage: ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:57.836 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.836 INFO  DAGScheduler - Missing parents: List()
14:51:57.836 INFO  DAGScheduler - Submitting ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:57.847 INFO  MemoryStore - Block broadcast_553 stored as values in memory (estimated size 286.8 KiB, free 1916.9 MiB)
14:51:57.848 INFO  MemoryStore - Block broadcast_553_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1916.8 MiB)
14:51:57.848 INFO  BlockManagerInfo - Added broadcast_553_piece0 in memory on localhost:44923 (size: 103.6 KiB, free: 1919.3 MiB)
14:51:57.848 INFO  SparkContext - Created broadcast 553 from broadcast at DAGScheduler.scala:1580
14:51:57.849 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:57.849 INFO  TaskSchedulerImpl - Adding task set 272.0 with 1 tasks resource profile 0
14:51:57.849 INFO  TaskSetManager - Starting task 0.0 in stage 272.0 (TID 340) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
14:51:57.849 INFO  Executor - Running task 0.0 in stage 272.0 (TID 340)
14:51:57.874 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
14:51:57.887 INFO  BlockManagerInfo - Removed broadcast_546_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.3 MiB)
14:51:57.887 INFO  BlockManagerInfo - Removed broadcast_543_piece0 on localhost:44923 in memory (size: 153.6 KiB, free: 1919.5 MiB)
14:51:57.887 INFO  Executor - Finished task 0.0 in stage 272.0 (TID 340). 154144 bytes result sent to driver
14:51:57.888 INFO  BlockManagerInfo - Removed broadcast_544_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.5 MiB)
14:51:57.888 INFO  BlockManagerInfo - Removed broadcast_548_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.5 MiB)
14:51:57.888 INFO  TaskSetManager - Finished task 0.0 in stage 272.0 (TID 340) in 39 ms on localhost (executor driver) (1/1)
14:51:57.888 INFO  TaskSchedulerImpl - Removed TaskSet 272.0, whose tasks have all completed, from pool 
14:51:57.888 INFO  DAGScheduler - ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.052 s
14:51:57.889 INFO  DAGScheduler - Job 208 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.889 INFO  TaskSchedulerImpl - Killing all running tasks in stage 272: Stage finished
14:51:57.889 INFO  DAGScheduler - Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.053457 s
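The `DAGScheduler` lines above report a wall-clock duration for each completed job ("Job N finished: …, took T s"). When auditing a long run of these logs, the durations can be pulled out with a small stdlib-only parser. This is a hypothetical helper for illustration, not part of GATK or Spark; the regex is matched against the exact log format shown above.

```python
import re

# Matches DAGScheduler completion lines, e.g.
#   "... Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.053457 s"
# (Deliberately does NOT match "Job 208 is finished. Cancelling ..." status lines.)
JOB_RE = re.compile(r"Job (\d+) finished: (.+?), took ([0-9.]+) s")

def job_durations(lines):
    """Return {job_id: (call_site, seconds)} for each finished Spark job."""
    out = {}
    for line in lines:
        m = JOB_RE.search(line)
        if m:
            out[int(m.group(1))] = (m.group(2), float(m.group(3)))
    return out

# Sample lines copied from the log above.
log = [
    "14:51:57.808 INFO  DAGScheduler - Job 207 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007154 s",
    "14:51:57.889 INFO  DAGScheduler - Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.053457 s",
]
durations = job_durations(log)
```

Summed over a whole report, this makes it easy to see which `collect` call sites dominate the test's 5.5 s total.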
14:51:57.889 INFO  BlockManagerInfo - Removed broadcast_547_piece0 on localhost:44923 in memory (size: 153.7 KiB, free: 1919.7 MiB)
14:51:57.890 INFO  BlockManagerInfo - Removed broadcast_549_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.7 MiB)
14:51:57.890 INFO  BlockManagerInfo - Removed broadcast_545_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.8 MiB)
14:51:57.890 INFO  BlockManagerInfo - Removed broadcast_542_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.8 MiB)
14:51:57.891 INFO  BlockManagerInfo - Removed broadcast_541_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.8 MiB)
14:51:57.891 INFO  BlockManagerInfo - Removed broadcast_552_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.8 MiB)
14:51:57.892 INFO  MemoryStore - Block broadcast_554 stored as values in memory (estimated size 608.0 B, free 1919.3 MiB)
14:51:57.893 INFO  MemoryStore - Block broadcast_554_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.3 MiB)
14:51:57.893 INFO  BlockManagerInfo - Added broadcast_554_piece0 in memory on localhost:44923 (size: 206.0 B, free: 1919.8 MiB)
14:51:57.893 INFO  SparkContext - Created broadcast 554 from broadcast at CramSource.java:114
14:51:57.894 INFO  MemoryStore - Block broadcast_555 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
14:51:57.900 INFO  MemoryStore - Block broadcast_555_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
14:51:57.900 INFO  BlockManagerInfo - Added broadcast_555_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.8 MiB)
14:51:57.900 INFO  SparkContext - Created broadcast 555 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.920 INFO  MemoryStore - Block broadcast_556 stored as values in memory (estimated size 608.0 B, free 1918.9 MiB)
14:51:57.921 INFO  MemoryStore - Block broadcast_556_piece0 stored as bytes in memory (estimated size 206.0 B, free 1918.9 MiB)
14:51:57.921 INFO  BlockManagerInfo - Added broadcast_556_piece0 in memory on localhost:44923 (size: 206.0 B, free: 1919.8 MiB)
14:51:57.921 INFO  SparkContext - Created broadcast 556 from broadcast at CramSource.java:114
14:51:57.922 INFO  MemoryStore - Block broadcast_557 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
14:51:57.929 INFO  MemoryStore - Block broadcast_557_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
14:51:57.929 INFO  BlockManagerInfo - Added broadcast_557_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.8 MiB)
14:51:57.929 INFO  SparkContext - Created broadcast 557 from newAPIHadoopFile at PathSplitSource.java:96
14:51:57.947 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:57.947 INFO  DAGScheduler - Got job 209 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:57.948 INFO  DAGScheduler - Final stage: ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:57.948 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.948 INFO  DAGScheduler - Missing parents: List()
14:51:57.948 INFO  DAGScheduler - Submitting ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:57.948 INFO  MemoryStore - Block broadcast_558 stored as values in memory (estimated size 3.0 KiB, free 1918.6 MiB)
14:51:57.949 INFO  MemoryStore - Block broadcast_558_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.6 MiB)
14:51:57.949 INFO  BlockManagerInfo - Added broadcast_558_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.7 MiB)
14:51:57.949 INFO  SparkContext - Created broadcast 558 from broadcast at DAGScheduler.scala:1580
14:51:57.949 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:57.949 INFO  TaskSchedulerImpl - Adding task set 273.0 with 4 tasks resource profile 0
14:51:57.950 INFO  TaskSetManager - Starting task 0.0 in stage 273.0 (TID 341) (localhost, executor driver, partition 0, PROCESS_LOCAL, 8303 bytes) 
14:51:57.950 INFO  TaskSetManager - Starting task 1.0 in stage 273.0 (TID 342) (localhost, executor driver, partition 1, PROCESS_LOCAL, 8292 bytes) 
14:51:57.950 INFO  TaskSetManager - Starting task 2.0 in stage 273.0 (TID 343) (localhost, executor driver, partition 2, PROCESS_LOCAL, 8292 bytes) 
14:51:57.950 INFO  TaskSetManager - Starting task 3.0 in stage 273.0 (TID 344) (localhost, executor driver, partition 3, PROCESS_LOCAL, 8184 bytes) 
14:51:57.951 INFO  Executor - Running task 2.0 in stage 273.0 (TID 343)
14:51:57.951 INFO  Executor - Running task 0.0 in stage 273.0 (TID 341)
14:51:57.951 INFO  Executor - Running task 1.0 in stage 273.0 (TID 342)
14:51:57.951 INFO  Executor - Running task 3.0 in stage 273.0 (TID 344)
14:51:57.952 INFO  Executor - Finished task 3.0 in stage 273.0 (TID 344). 1416 bytes result sent to driver
14:51:57.952 INFO  Executor - Finished task 0.0 in stage 273.0 (TID 341). 1535 bytes result sent to driver
14:51:57.952 INFO  TaskSetManager - Finished task 3.0 in stage 273.0 (TID 344) in 2 ms on localhost (executor driver) (1/4)
14:51:57.953 INFO  TaskSetManager - Finished task 0.0 in stage 273.0 (TID 341) in 3 ms on localhost (executor driver) (2/4)
14:51:57.953 INFO  Executor - Finished task 2.0 in stage 273.0 (TID 343). 1567 bytes result sent to driver
14:51:57.954 INFO  Executor - Finished task 1.0 in stage 273.0 (TID 342). 1567 bytes result sent to driver
14:51:57.954 INFO  TaskSetManager - Finished task 2.0 in stage 273.0 (TID 343) in 4 ms on localhost (executor driver) (3/4)
14:51:57.954 INFO  TaskSetManager - Finished task 1.0 in stage 273.0 (TID 342) in 4 ms on localhost (executor driver) (4/4)
14:51:57.954 INFO  TaskSchedulerImpl - Removed TaskSet 273.0, whose tasks have all completed, from pool 
14:51:57.954 INFO  DAGScheduler - ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
14:51:57.954 INFO  DAGScheduler - Job 209 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:57.954 INFO  TaskSchedulerImpl - Killing all running tasks in stage 273: Stage finished
14:51:57.954 INFO  DAGScheduler - Job 209 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007126 s
14:51:57.955 INFO  FileInputFormat - Total input files to process : 1
14:51:57.981 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:57.981 INFO  DAGScheduler - Got job 210 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:57.981 INFO  DAGScheduler - Final stage: ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:57.981 INFO  DAGScheduler - Parents of final stage: List()
14:51:57.981 INFO  DAGScheduler - Missing parents: List()
14:51:57.981 INFO  DAGScheduler - Submitting ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:57.993 INFO  MemoryStore - Block broadcast_559 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
14:51:57.994 INFO  MemoryStore - Block broadcast_559_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
14:51:57.994 INFO  BlockManagerInfo - Added broadcast_559_piece0 in memory on localhost:44923 (size: 103.6 KiB, free: 1919.6 MiB)
14:51:57.994 INFO  SparkContext - Created broadcast 559 from broadcast at DAGScheduler.scala:1580
14:51:57.994 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:57.994 INFO  TaskSchedulerImpl - Adding task set 274.0 with 1 tasks resource profile 0
14:51:57.995 INFO  TaskSetManager - Starting task 0.0 in stage 274.0 (TID 345) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7851 bytes) 
14:51:57.995 INFO  Executor - Running task 0.0 in stage 274.0 (TID 345)
14:51:58.018 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram:0+13330
14:51:58.023 INFO  Executor - Finished task 0.0 in stage 274.0 (TID 345). 3736 bytes result sent to driver
14:51:58.023 INFO  TaskSetManager - Finished task 0.0 in stage 274.0 (TID 345) in 28 ms on localhost (executor driver) (1/1)
14:51:58.023 INFO  TaskSchedulerImpl - Removed TaskSet 274.0, whose tasks have all completed, from pool 
14:51:58.024 INFO  DAGScheduler - ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.042 s
14:51:58.024 INFO  DAGScheduler - Job 210 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.024 INFO  TaskSchedulerImpl - Killing all running tasks in stage 274: Stage finished
14:51:58.024 INFO  DAGScheduler - Job 210 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.042947 s
14:51:58.027 INFO  MemoryStore - Block broadcast_560 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
14:51:58.033 INFO  MemoryStore - Block broadcast_560_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
14:51:58.033 INFO  BlockManagerInfo - Added broadcast_560_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.6 MiB)
14:51:58.034 INFO  SparkContext - Created broadcast 560 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
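The "Ignoring SAM validation error" block above is what htsjdk emits under LENIENT validation stringency: each malformed record is reported but not fatal, which is exactly what the `readsSparkSourceTestLenient` cases exercise. To check that a lenient run ignored only the expected error classes, the messages can be tallied with a stdlib-only sketch like the following (a hypothetical parser written for this log format, not GATK code):

```python
import re
from collections import Counter

# Matches lines of the form
#   "Ignoring SAM validation error: ERROR::<TYPE>:Record <n>, Read name ..."
ERR_RE = re.compile(r"Ignoring SAM validation error: ERROR::([A-Z_]+):Record (\d+)")

def tally_errors(lines):
    """Count ignored validation errors by type and collect affected record numbers."""
    counts = Counter()
    records = set()
    for line in lines:
        m = ERR_RE.search(line)
        if m:
            counts[m.group(1)] += 1
            records.add(int(m.group(2)))
    return counts, records

# Sample lines copied (abbreviated) from the block above.
lines = [
    "Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, ...",
    "Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, ...",
    "Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, ...",
    "Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, ...",
]
counts, records = tally_errors(lines)
```

In the full block above, each of the four bad records (130, 275, 419, 440) triggers the same trio of mate-related errors, consistent with a single underlying problem: a mapped read whose mate is flagged unmapped but still carries alignment fields.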
14:51:58.060 INFO  MemoryStore - Block broadcast_561 stored as values in memory (estimated size 297.9 KiB, free 1917.6 MiB)
14:51:58.069 INFO  MemoryStore - Block broadcast_561_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.5 MiB)
14:51:58.069 INFO  BlockManagerInfo - Added broadcast_561_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.6 MiB)
14:51:58.069 INFO  SparkContext - Created broadcast 561 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.094 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:58.094 INFO  DAGScheduler - Got job 211 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:58.094 INFO  DAGScheduler - Final stage: ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:58.094 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.094 INFO  DAGScheduler - Missing parents: List()
14:51:58.094 INFO  DAGScheduler - Submitting ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:58.095 INFO  MemoryStore - Block broadcast_562 stored as values in memory (estimated size 3.0 KiB, free 1917.5 MiB)
14:51:58.095 INFO  MemoryStore - Block broadcast_562_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.5 MiB)
14:51:58.095 INFO  BlockManagerInfo - Added broadcast_562_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.5 MiB)
14:51:58.096 INFO  SparkContext - Created broadcast 562 from broadcast at DAGScheduler.scala:1580
14:51:58.096 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:58.096 INFO  TaskSchedulerImpl - Adding task set 275.0 with 4 tasks resource profile 0
14:51:58.096 INFO  TaskSetManager - Starting task 0.0 in stage 275.0 (TID 346) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
14:51:58.097 INFO  TaskSetManager - Starting task 1.0 in stage 275.0 (TID 347) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
14:51:58.097 INFO  TaskSetManager - Starting task 2.0 in stage 275.0 (TID 348) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
14:51:58.097 INFO  TaskSetManager - Starting task 3.0 in stage 275.0 (TID 349) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
14:51:58.097 INFO  Executor - Running task 0.0 in stage 275.0 (TID 346)
14:51:58.097 INFO  Executor - Running task 1.0 in stage 275.0 (TID 347)
14:51:58.097 INFO  Executor - Running task 3.0 in stage 275.0 (TID 349)
14:51:58.097 INFO  Executor - Running task 2.0 in stage 275.0 (TID 348)
14:51:58.099 INFO  Executor - Finished task 2.0 in stage 275.0 (TID 348). 40337 bytes result sent to driver
14:51:58.099 INFO  Executor - Finished task 3.0 in stage 275.0 (TID 349). 40762 bytes result sent to driver
14:51:58.099 INFO  Executor - Finished task 0.0 in stage 275.0 (TID 346). 40312 bytes result sent to driver
14:51:58.099 INFO  Executor - Finished task 1.0 in stage 275.0 (TID 347). 40426 bytes result sent to driver
14:51:58.100 INFO  TaskSetManager - Finished task 2.0 in stage 275.0 (TID 348) in 3 ms on localhost (executor driver) (1/4)
14:51:58.100 INFO  TaskSetManager - Finished task 3.0 in stage 275.0 (TID 349) in 3 ms on localhost (executor driver) (2/4)
14:51:58.100 INFO  TaskSetManager - Finished task 1.0 in stage 275.0 (TID 347) in 4 ms on localhost (executor driver) (3/4)
14:51:58.100 INFO  TaskSetManager - Finished task 0.0 in stage 275.0 (TID 346) in 4 ms on localhost (executor driver) (4/4)
14:51:58.100 INFO  TaskSchedulerImpl - Removed TaskSet 275.0, whose tasks have all completed, from pool 
14:51:58.100 INFO  DAGScheduler - ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
14:51:58.100 INFO  DAGScheduler - Job 211 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.100 INFO  TaskSchedulerImpl - Killing all running tasks in stage 275: Stage finished
14:51:58.100 INFO  DAGScheduler - Job 211 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006451 s
14:51:58.101 INFO  FileInputFormat - Total input files to process : 1
14:51:58.138 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:58.138 INFO  DAGScheduler - Got job 212 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:58.138 INFO  DAGScheduler - Final stage: ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:58.138 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.138 INFO  DAGScheduler - Missing parents: List()
14:51:58.138 INFO  DAGScheduler - Submitting ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:58.155 INFO  MemoryStore - Block broadcast_563 stored as values in memory (estimated size 426.2 KiB, free 1917.1 MiB)
14:51:58.156 INFO  MemoryStore - Block broadcast_563_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.0 MiB)
14:51:58.157 INFO  BlockManagerInfo - Added broadcast_563_piece0 in memory on localhost:44923 (size: 153.7 KiB, free: 1919.4 MiB)
14:51:58.157 INFO  SparkContext - Created broadcast 563 from broadcast at DAGScheduler.scala:1580
14:51:58.157 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:58.157 INFO  TaskSchedulerImpl - Adding task set 276.0 with 1 tasks resource profile 0
14:51:58.158 INFO  TaskSetManager - Starting task 0.0 in stage 276.0 (TID 350) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
14:51:58.158 INFO  Executor - Running task 0.0 in stage 276.0 (TID 350)
14:51:58.190 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
14:51:58.195 INFO  Executor - Finished task 0.0 in stage 276.0 (TID 350). 159572 bytes result sent to driver
14:51:58.195 INFO  TaskSetManager - Finished task 0.0 in stage 276.0 (TID 350) in 38 ms on localhost (executor driver) (1/1)
14:51:58.196 INFO  TaskSchedulerImpl - Removed TaskSet 276.0, whose tasks have all completed, from pool 
14:51:58.196 INFO  DAGScheduler - ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.057 s
14:51:58.196 INFO  DAGScheduler - Job 212 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.196 INFO  TaskSchedulerImpl - Killing all running tasks in stage 276: Stage finished
14:51:58.196 INFO  DAGScheduler - Job 212 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.058103 s
14:51:58.198 INFO  MemoryStore - Block broadcast_564 stored as values in memory (estimated size 536.0 B, free 1917.0 MiB)
14:51:58.203 INFO  MemoryStore - Block broadcast_564_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.0 MiB)
14:51:58.203 INFO  BlockManagerInfo - Added broadcast_564_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1919.4 MiB)
14:51:58.203 INFO  BlockManagerInfo - Removed broadcast_551_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.4 MiB)
14:51:58.203 INFO  SparkContext - Created broadcast 564 from broadcast at CramSource.java:114
14:51:58.203 INFO  BlockManagerInfo - Removed broadcast_557_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.5 MiB)
14:51:58.204 INFO  BlockManagerInfo - Removed broadcast_560_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.5 MiB)
14:51:58.204 INFO  BlockManagerInfo - Removed broadcast_558_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.5 MiB)
14:51:58.204 INFO  MemoryStore - Block broadcast_565 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
14:51:58.204 INFO  BlockManagerInfo - Removed broadcast_556_piece0 on localhost:44923 in memory (size: 206.0 B, free: 1919.5 MiB)
14:51:58.205 INFO  BlockManagerInfo - Removed broadcast_553_piece0 on localhost:44923 in memory (size: 103.6 KiB, free: 1919.6 MiB)
14:51:58.205 INFO  BlockManagerInfo - Removed broadcast_562_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.7 MiB)
14:51:58.205 INFO  BlockManagerInfo - Removed broadcast_550_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.7 MiB)
14:51:58.206 INFO  BlockManagerInfo - Removed broadcast_559_piece0 on localhost:44923 in memory (size: 103.6 KiB, free: 1919.8 MiB)
14:51:58.206 INFO  BlockManagerInfo - Removed broadcast_555_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.8 MiB)
14:51:58.206 INFO  BlockManagerInfo - Removed broadcast_563_piece0 on localhost:44923 in memory (size: 153.7 KiB, free: 1920.0 MiB)
14:51:58.207 INFO  BlockManagerInfo - Removed broadcast_554_piece0 on localhost:44923 in memory (size: 206.0 B, free: 1920.0 MiB)
14:51:58.207 INFO  BlockManagerInfo - Removed broadcast_561_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1920.0 MiB)
14:51:58.212 INFO  MemoryStore - Block broadcast_565_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
14:51:58.212 INFO  BlockManagerInfo - Added broadcast_565_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1920.0 MiB)
14:51:58.212 INFO  SparkContext - Created broadcast 565 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
14:51:58.235 INFO  MemoryStore - Block broadcast_566 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
14:51:58.235 INFO  MemoryStore - Block broadcast_566_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
14:51:58.235 INFO  BlockManagerInfo - Added broadcast_566_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1920.0 MiB)
14:51:58.235 INFO  SparkContext - Created broadcast 566 from broadcast at CramSource.java:114
14:51:58.236 INFO  MemoryStore - Block broadcast_567 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
14:51:58.242 INFO  MemoryStore - Block broadcast_567_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
14:51:58.243 INFO  BlockManagerInfo - Added broadcast_567_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.9 MiB)
14:51:58.243 INFO  SparkContext - Created broadcast 567 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.260 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:58.260 INFO  DAGScheduler - Got job 213 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:58.260 INFO  DAGScheduler - Final stage: ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:58.260 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.260 INFO  DAGScheduler - Missing parents: List()
14:51:58.260 INFO  DAGScheduler - Submitting ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:58.261 INFO  MemoryStore - Block broadcast_568 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
14:51:58.261 INFO  MemoryStore - Block broadcast_568_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
14:51:58.261 INFO  BlockManagerInfo - Added broadcast_568_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.9 MiB)
14:51:58.262 INFO  SparkContext - Created broadcast 568 from broadcast at DAGScheduler.scala:1580
14:51:58.262 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:58.262 INFO  TaskSchedulerImpl - Adding task set 277.0 with 4 tasks resource profile 0
14:51:58.262 INFO  TaskSetManager - Starting task 0.0 in stage 277.0 (TID 351) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
14:51:58.263 INFO  TaskSetManager - Starting task 1.0 in stage 277.0 (TID 352) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
14:51:58.263 INFO  TaskSetManager - Starting task 2.0 in stage 277.0 (TID 353) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
14:51:58.264 INFO  TaskSetManager - Starting task 3.0 in stage 277.0 (TID 354) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
14:51:58.264 INFO  Executor - Running task 0.0 in stage 277.0 (TID 351)
14:51:58.264 INFO  Executor - Running task 1.0 in stage 277.0 (TID 352)
14:51:58.264 INFO  Executor - Running task 3.0 in stage 277.0 (TID 354)
14:51:58.264 INFO  Executor - Running task 2.0 in stage 277.0 (TID 353)
14:51:58.265 INFO  Executor - Finished task 3.0 in stage 277.0 (TID 354). 39300 bytes result sent to driver
14:51:58.265 INFO  Executor - Finished task 0.0 in stage 277.0 (TID 351). 38907 bytes result sent to driver
14:51:58.266 INFO  Executor - Finished task 2.0 in stage 277.0 (TID 353). 39070 bytes result sent to driver
14:51:58.266 INFO  Executor - Finished task 1.0 in stage 277.0 (TID 352). 39130 bytes result sent to driver
14:51:58.267 INFO  TaskSetManager - Finished task 2.0 in stage 277.0 (TID 353) in 3 ms on localhost (executor driver) (1/4)
14:51:58.267 INFO  TaskSetManager - Finished task 1.0 in stage 277.0 (TID 352) in 4 ms on localhost (executor driver) (2/4)
14:51:58.267 INFO  TaskSetManager - Finished task 3.0 in stage 277.0 (TID 354) in 4 ms on localhost (executor driver) (3/4)
14:51:58.267 INFO  TaskSetManager - Finished task 0.0 in stage 277.0 (TID 351) in 5 ms on localhost (executor driver) (4/4)
14:51:58.267 INFO  TaskSchedulerImpl - Removed TaskSet 277.0, whose tasks have all completed, from pool 
14:51:58.267 INFO  DAGScheduler - ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
14:51:58.267 INFO  DAGScheduler - Job 213 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.267 INFO  TaskSchedulerImpl - Killing all running tasks in stage 277: Stage finished
14:51:58.267 INFO  DAGScheduler - Job 213 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007179 s
14:51:58.268 INFO  FileInputFormat - Total input files to process : 1
14:51:58.293 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:58.294 INFO  DAGScheduler - Got job 214 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:58.294 INFO  DAGScheduler - Final stage: ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:58.294 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.294 INFO  DAGScheduler - Missing parents: List()
14:51:58.294 INFO  DAGScheduler - Submitting ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:58.305 INFO  MemoryStore - Block broadcast_569 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
14:51:58.306 INFO  MemoryStore - Block broadcast_569_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
14:51:58.306 INFO  BlockManagerInfo - Added broadcast_569_piece0 in memory on localhost:44923 (size: 103.6 KiB, free: 1919.8 MiB)
14:51:58.307 INFO  SparkContext - Created broadcast 569 from broadcast at DAGScheduler.scala:1580
14:51:58.307 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:58.307 INFO  TaskSchedulerImpl - Adding task set 278.0 with 1 tasks resource profile 0
14:51:58.307 INFO  TaskSetManager - Starting task 0.0 in stage 278.0 (TID 355) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
14:51:58.308 INFO  Executor - Running task 0.0 in stage 278.0 (TID 355)
14:51:58.330 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
14:51:58.339 INFO  Executor - Finished task 0.0 in stage 278.0 (TID 355). 154101 bytes result sent to driver
14:51:58.340 INFO  TaskSetManager - Finished task 0.0 in stage 278.0 (TID 355) in 33 ms on localhost (executor driver) (1/1)
14:51:58.340 INFO  TaskSchedulerImpl - Removed TaskSet 278.0, whose tasks have all completed, from pool 
14:51:58.340 INFO  DAGScheduler - ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.046 s
14:51:58.340 INFO  DAGScheduler - Job 214 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.340 INFO  TaskSchedulerImpl - Killing all running tasks in stage 278: Stage finished
14:51:58.340 INFO  DAGScheduler - Job 214 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.046865 s
14:51:58.343 INFO  MemoryStore - Block broadcast_570 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
14:51:58.354 INFO  MemoryStore - Block broadcast_570_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
14:51:58.354 INFO  BlockManagerInfo - Added broadcast_570_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.7 MiB)
14:51:58.354 INFO  SparkContext - Created broadcast 570 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.377 INFO  MemoryStore - Block broadcast_571 stored as values in memory (estimated size 297.9 KiB, free 1918.3 MiB)
14:51:58.383 INFO  MemoryStore - Block broadcast_571_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.3 MiB)
14:51:58.383 INFO  BlockManagerInfo - Added broadcast_571_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.7 MiB)
14:51:58.383 INFO  SparkContext - Created broadcast 571 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.406 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:58.406 INFO  DAGScheduler - Got job 215 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:58.406 INFO  DAGScheduler - Final stage: ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:58.406 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.406 INFO  DAGScheduler - Missing parents: List()
14:51:58.406 INFO  DAGScheduler - Submitting ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:58.407 INFO  MemoryStore - Block broadcast_572 stored as values in memory (estimated size 3.0 KiB, free 1918.2 MiB)
14:51:58.407 INFO  MemoryStore - Block broadcast_572_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.2 MiB)
14:51:58.407 INFO  BlockManagerInfo - Added broadcast_572_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.7 MiB)
14:51:58.408 INFO  SparkContext - Created broadcast 572 from broadcast at DAGScheduler.scala:1580
14:51:58.408 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:58.408 INFO  TaskSchedulerImpl - Adding task set 279.0 with 4 tasks resource profile 0
14:51:58.408 INFO  TaskSetManager - Starting task 0.0 in stage 279.0 (TID 356) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes) 
14:51:58.409 INFO  TaskSetManager - Starting task 1.0 in stage 279.0 (TID 357) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes) 
14:51:58.409 INFO  TaskSetManager - Starting task 2.0 in stage 279.0 (TID 358) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes) 
14:51:58.409 INFO  TaskSetManager - Starting task 3.0 in stage 279.0 (TID 359) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes) 
14:51:58.409 INFO  Executor - Running task 0.0 in stage 279.0 (TID 356)
14:51:58.409 INFO  Executor - Running task 2.0 in stage 279.0 (TID 358)
14:51:58.409 INFO  Executor - Running task 1.0 in stage 279.0 (TID 357)
14:51:58.410 INFO  Executor - Running task 3.0 in stage 279.0 (TID 359)
14:51:58.411 INFO  Executor - Finished task 2.0 in stage 279.0 (TID 358). 40337 bytes result sent to driver
14:51:58.411 INFO  Executor - Finished task 1.0 in stage 279.0 (TID 357). 40469 bytes result sent to driver
14:51:58.411 INFO  Executor - Finished task 3.0 in stage 279.0 (TID 359). 40762 bytes result sent to driver
14:51:58.411 INFO  Executor - Finished task 0.0 in stage 279.0 (TID 356). 40355 bytes result sent to driver
14:51:58.412 INFO  TaskSetManager - Finished task 0.0 in stage 279.0 (TID 356) in 4 ms on localhost (executor driver) (1/4)
14:51:58.412 INFO  TaskSetManager - Finished task 2.0 in stage 279.0 (TID 358) in 3 ms on localhost (executor driver) (2/4)
14:51:58.412 INFO  TaskSetManager - Finished task 1.0 in stage 279.0 (TID 357) in 4 ms on localhost (executor driver) (3/4)
14:51:58.412 INFO  TaskSetManager - Finished task 3.0 in stage 279.0 (TID 359) in 3 ms on localhost (executor driver) (4/4)
14:51:58.412 INFO  TaskSchedulerImpl - Removed TaskSet 279.0, whose tasks have all completed, from pool 
14:51:58.412 INFO  DAGScheduler - ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
14:51:58.412 INFO  DAGScheduler - Job 215 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.412 INFO  TaskSchedulerImpl - Killing all running tasks in stage 279: Stage finished
14:51:58.412 INFO  DAGScheduler - Job 215 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006407 s
14:51:58.413 INFO  FileInputFormat - Total input files to process : 1
14:51:58.450 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:58.450 INFO  DAGScheduler - Got job 216 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:58.450 INFO  DAGScheduler - Final stage: ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:58.450 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.450 INFO  DAGScheduler - Missing parents: List()
14:51:58.450 INFO  DAGScheduler - Submitting ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:58.467 INFO  MemoryStore - Block broadcast_573 stored as values in memory (estimated size 426.2 KiB, free 1917.8 MiB)
14:51:58.468 INFO  MemoryStore - Block broadcast_573_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.7 MiB)
14:51:58.468 INFO  BlockManagerInfo - Added broadcast_573_piece0 in memory on localhost:44923 (size: 153.7 KiB, free: 1919.5 MiB)
14:51:58.468 INFO  SparkContext - Created broadcast 573 from broadcast at DAGScheduler.scala:1580
14:51:58.469 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:58.469 INFO  TaskSchedulerImpl - Adding task set 280.0 with 1 tasks resource profile 0
14:51:58.469 INFO  TaskSetManager - Starting task 0.0 in stage 280.0 (TID 360) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
14:51:58.469 INFO  Executor - Running task 0.0 in stage 280.0 (TID 360)
14:51:58.503 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
14:51:58.507 INFO  Executor - Finished task 0.0 in stage 280.0 (TID 360). 159572 bytes result sent to driver
14:51:58.508 INFO  TaskSetManager - Finished task 0.0 in stage 280.0 (TID 360) in 39 ms on localhost (executor driver) (1/1)
14:51:58.508 INFO  TaskSchedulerImpl - Removed TaskSet 280.0, whose tasks have all completed, from pool 
14:51:58.508 INFO  DAGScheduler - ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.058 s
14:51:58.508 INFO  DAGScheduler - Job 216 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.508 INFO  TaskSchedulerImpl - Killing all running tasks in stage 280: Stage finished
14:51:58.508 INFO  DAGScheduler - Job 216 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.058500 s
14:51:58.510 INFO  MemoryStore - Block broadcast_574 stored as values in memory (estimated size 536.0 B, free 1917.7 MiB)
14:51:58.515 INFO  MemoryStore - Block broadcast_574_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.7 MiB)
14:51:58.515 INFO  BlockManagerInfo - Added broadcast_574_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1919.5 MiB)
14:51:58.515 INFO  BlockManagerInfo - Removed broadcast_564_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.5 MiB)
14:51:58.515 INFO  SparkContext - Created broadcast 574 from broadcast at CramSource.java:114
14:51:58.516 INFO  BlockManagerInfo - Removed broadcast_567_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.6 MiB)
14:51:58.516 INFO  BlockManagerInfo - Removed broadcast_570_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.6 MiB)
14:51:58.516 INFO  BlockManagerInfo - Removed broadcast_566_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.6 MiB)
14:51:58.517 INFO  MemoryStore - Block broadcast_575 stored as values in memory (estimated size 297.9 KiB, free 1918.1 MiB)
14:51:58.517 INFO  BlockManagerInfo - Removed broadcast_569_piece0 on localhost:44923 in memory (size: 103.6 KiB, free: 1919.7 MiB)
14:51:58.517 INFO  BlockManagerInfo - Removed broadcast_571_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.8 MiB)
14:51:58.517 INFO  BlockManagerInfo - Removed broadcast_572_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.8 MiB)
14:51:58.518 INFO  BlockManagerInfo - Removed broadcast_568_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.8 MiB)
14:51:58.518 INFO  BlockManagerInfo - Removed broadcast_573_piece0 on localhost:44923 in memory (size: 153.7 KiB, free: 1920.0 MiB)
14:51:58.519 INFO  BlockManagerInfo - Removed broadcast_565_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1920.0 MiB)
14:51:58.524 INFO  MemoryStore - Block broadcast_575_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
14:51:58.524 INFO  BlockManagerInfo - Added broadcast_575_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1920.0 MiB)
14:51:58.524 INFO  SparkContext - Created broadcast 575 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.545 INFO  MemoryStore - Block broadcast_576 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
14:51:58.545 INFO  MemoryStore - Block broadcast_576_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
14:51:58.546 INFO  BlockManagerInfo - Added broadcast_576_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1920.0 MiB)
14:51:58.546 INFO  SparkContext - Created broadcast 576 from broadcast at CramSource.java:114
14:51:58.547 INFO  MemoryStore - Block broadcast_577 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
14:51:58.553 INFO  MemoryStore - Block broadcast_577_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
14:51:58.553 INFO  BlockManagerInfo - Added broadcast_577_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.9 MiB)
14:51:58.553 INFO  SparkContext - Created broadcast 577 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.571 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
14:51:58.571 INFO  DAGScheduler - Got job 217 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
14:51:58.571 INFO  DAGScheduler - Final stage: ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111)
14:51:58.571 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.571 INFO  DAGScheduler - Missing parents: List()
14:51:58.571 INFO  DAGScheduler - Submitting ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:58.571 INFO  MemoryStore - Block broadcast_578 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
14:51:58.572 INFO  MemoryStore - Block broadcast_578_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
14:51:58.572 INFO  BlockManagerInfo - Added broadcast_578_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.9 MiB)
14:51:58.572 INFO  SparkContext - Created broadcast 578 from broadcast at DAGScheduler.scala:1580
14:51:58.572 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:58.572 INFO  TaskSchedulerImpl - Adding task set 281.0 with 4 tasks resource profile 0
14:51:58.573 INFO  TaskSetManager - Starting task 0.0 in stage 281.0 (TID 361) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes) 
14:51:58.574 INFO  TaskSetManager - Starting task 1.0 in stage 281.0 (TID 362) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes) 
14:51:58.574 INFO  TaskSetManager - Starting task 2.0 in stage 281.0 (TID 363) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes) 
14:51:58.574 INFO  TaskSetManager - Starting task 3.0 in stage 281.0 (TID 364) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes) 
14:51:58.575 INFO  Executor - Running task 0.0 in stage 281.0 (TID 361)
14:51:58.575 INFO  Executor - Running task 1.0 in stage 281.0 (TID 362)
14:51:58.575 INFO  Executor - Running task 2.0 in stage 281.0 (TID 363)
14:51:58.575 INFO  Executor - Running task 3.0 in stage 281.0 (TID 364)
14:51:58.576 INFO  Executor - Finished task 2.0 in stage 281.0 (TID 363). 39027 bytes result sent to driver
14:51:58.576 INFO  Executor - Finished task 1.0 in stage 281.0 (TID 362). 39087 bytes result sent to driver
14:51:58.576 INFO  Executor - Finished task 3.0 in stage 281.0 (TID 364). 39300 bytes result sent to driver
14:51:58.577 INFO  TaskSetManager - Finished task 2.0 in stage 281.0 (TID 363) in 3 ms on localhost (executor driver) (1/4)
14:51:58.577 INFO  Executor - Finished task 0.0 in stage 281.0 (TID 361). 38950 bytes result sent to driver
14:51:58.577 INFO  TaskSetManager - Finished task 1.0 in stage 281.0 (TID 362) in 4 ms on localhost (executor driver) (2/4)
14:51:58.578 INFO  TaskSetManager - Finished task 0.0 in stage 281.0 (TID 361) in 5 ms on localhost (executor driver) (3/4)
14:51:58.578 INFO  TaskSetManager - Finished task 3.0 in stage 281.0 (TID 364) in 4 ms on localhost (executor driver) (4/4)
14:51:58.578 INFO  TaskSchedulerImpl - Removed TaskSet 281.0, whose tasks have all completed, from pool 
14:51:58.578 INFO  DAGScheduler - ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.007 s
14:51:58.578 INFO  DAGScheduler - Job 217 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.578 INFO  TaskSchedulerImpl - Killing all running tasks in stage 281: Stage finished
14:51:58.578 INFO  DAGScheduler - Job 217 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007644 s
14:51:58.579 INFO  FileInputFormat - Total input files to process : 1
14:51:58.606 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
14:51:58.606 INFO  DAGScheduler - Got job 218 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
14:51:58.606 INFO  DAGScheduler - Final stage: ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112)
14:51:58.606 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.606 INFO  DAGScheduler - Missing parents: List()
14:51:58.606 INFO  DAGScheduler - Submitting ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:58.617 INFO  MemoryStore - Block broadcast_579 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
14:51:58.618 INFO  MemoryStore - Block broadcast_579_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
14:51:58.619 INFO  BlockManagerInfo - Added broadcast_579_piece0 in memory on localhost:44923 (size: 103.6 KiB, free: 1919.8 MiB)
14:51:58.619 INFO  SparkContext - Created broadcast 579 from broadcast at DAGScheduler.scala:1580
14:51:58.619 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:51:58.619 INFO  TaskSchedulerImpl - Adding task set 282.0 with 1 tasks resource profile 0
14:51:58.620 INFO  TaskSetManager - Starting task 0.0 in stage 282.0 (TID 365) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
14:51:58.620 INFO  Executor - Running task 0.0 in stage 282.0 (TID 365)
14:51:58.644 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
14:51:58.654 INFO  Executor - Finished task 0.0 in stage 282.0 (TID 365). 154101 bytes result sent to driver
14:51:58.654 INFO  TaskSetManager - Finished task 0.0 in stage 282.0 (TID 365) in 35 ms on localhost (executor driver) (1/1)
14:51:58.654 INFO  TaskSchedulerImpl - Removed TaskSet 282.0, whose tasks have all completed, from pool 
14:51:58.655 INFO  DAGScheduler - ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.048 s
14:51:58.655 INFO  DAGScheduler - Job 218 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.655 INFO  TaskSchedulerImpl - Killing all running tasks in stage 282: Stage finished
14:51:58.655 INFO  DAGScheduler - Job 218 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.049129 s
14:51:58.658 INFO  MemoryStore - Block broadcast_580 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
14:51:58.668 INFO  MemoryStore - Block broadcast_580_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
14:51:58.669 INFO  BlockManagerInfo - Added broadcast_580_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.7 MiB)
14:51:58.669 INFO  SparkContext - Created broadcast 580 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.697 INFO  MemoryStore - Block broadcast_581 stored as values in memory (estimated size 536.0 B, free 1918.6 MiB)
14:51:58.698 INFO  MemoryStore - Block broadcast_581_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.6 MiB)
14:51:58.698 INFO  BlockManagerInfo - Added broadcast_581_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1919.7 MiB)
14:51:58.698 INFO  SparkContext - Created broadcast 581 from broadcast at CramSource.java:114
14:51:58.699 INFO  MemoryStore - Block broadcast_582 stored as values in memory (estimated size 297.9 KiB, free 1918.3 MiB)
14:51:58.705 INFO  MemoryStore - Block broadcast_582_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.3 MiB)
14:51:58.706 INFO  BlockManagerInfo - Added broadcast_582_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.7 MiB)
14:51:58.706 INFO  SparkContext - Created broadcast 582 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.731 WARN  FileSystem - Failed to initialize filesystem hdfs://bogus/path.bam: java.lang.IllegalArgumentException: java.net.UnknownHostException: bogus
14:51:58.735 INFO  MemoryStore - Block broadcast_583 stored as values in memory (estimated size 298.0 KiB, free 1918.0 MiB)
14:51:58.746 INFO  MemoryStore - Block broadcast_583_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.9 MiB)
14:51:58.746 INFO  BlockManagerInfo - Added broadcast_583_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.7 MiB)
14:51:58.747 INFO  SparkContext - Created broadcast 583 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.773 INFO  MemoryStore - Block broadcast_584 stored as values in memory (estimated size 298.0 KiB, free 1917.6 MiB)
14:51:58.779 INFO  MemoryStore - Block broadcast_584_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.6 MiB)
14:51:58.779 INFO  BlockManagerInfo - Added broadcast_584_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.6 MiB)
14:51:58.779 INFO  SparkContext - Created broadcast 584 from newAPIHadoopFile at PathSplitSource.java:96
14:51:58.803 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:154
14:51:58.803 INFO  DAGScheduler - Got job 219 (collect at ReadsSparkSourceUnitTest.java:154) with 4 output partitions
14:51:58.803 INFO  DAGScheduler - Final stage: ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154)
14:51:58.803 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.803 INFO  DAGScheduler - Missing parents: List()
14:51:58.803 INFO  DAGScheduler - Submitting ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
14:51:58.804 INFO  MemoryStore - Block broadcast_585 stored as values in memory (estimated size 3.0 KiB, free 1917.6 MiB)
14:51:58.804 INFO  MemoryStore - Block broadcast_585_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.6 MiB)
14:51:58.804 INFO  BlockManagerInfo - Added broadcast_585_piece0 in memory on localhost:44923 (size: 1778.0 B, free: 1919.6 MiB)
14:51:58.804 INFO  SparkContext - Created broadcast 585 from broadcast at DAGScheduler.scala:1580
14:51:58.805 INFO  DAGScheduler - Submitting 4 missing tasks from ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
14:51:58.805 INFO  TaskSchedulerImpl - Adding task set 283.0 with 4 tasks resource profile 0
14:51:58.805 INFO  TaskSetManager - Starting task 0.0 in stage 283.0 (TID 366) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes) 
14:51:58.806 INFO  TaskSetManager - Starting task 1.0 in stage 283.0 (TID 367) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes) 
14:51:58.806 INFO  TaskSetManager - Starting task 2.0 in stage 283.0 (TID 368) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes) 
14:51:58.806 INFO  TaskSetManager - Starting task 3.0 in stage 283.0 (TID 369) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes) 
14:51:58.806 INFO  Executor - Running task 1.0 in stage 283.0 (TID 367)
14:51:58.806 INFO  Executor - Running task 0.0 in stage 283.0 (TID 366)
14:51:58.806 INFO  Executor - Running task 3.0 in stage 283.0 (TID 369)
14:51:58.806 INFO  Executor - Running task 2.0 in stage 283.0 (TID 368)
14:51:58.813 INFO  Executor - Finished task 1.0 in stage 283.0 (TID 367). 163454 bytes result sent to driver
14:51:58.814 INFO  Executor - Finished task 3.0 in stage 283.0 (TID 369). 162780 bytes result sent to driver
14:51:58.814 INFO  BlockManagerInfo - Removed broadcast_582_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.6 MiB)
14:51:58.815 INFO  Executor - Finished task 0.0 in stage 283.0 (TID 366). 163204 bytes result sent to driver
14:51:58.815 INFO  Executor - Finished task 2.0 in stage 283.0 (TID 368). 163442 bytes result sent to driver
14:51:58.814 INFO  TaskSetManager - Finished task 1.0 in stage 283.0 (TID 367) in 9 ms on localhost (executor driver) (1/4)
14:51:58.816 INFO  BlockManagerInfo - Removed broadcast_581_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.6 MiB)
14:51:58.816 INFO  TaskSetManager - Finished task 3.0 in stage 283.0 (TID 369) in 10 ms on localhost (executor driver) (2/4)
14:51:58.816 INFO  TaskSetManager - Finished task 0.0 in stage 283.0 (TID 366) in 11 ms on localhost (executor driver) (3/4)
14:51:58.816 INFO  TaskSetManager - Finished task 2.0 in stage 283.0 (TID 368) in 10 ms on localhost (executor driver) (4/4)
14:51:58.816 INFO  TaskSchedulerImpl - Removed TaskSet 283.0, whose tasks have all completed, from pool 
14:51:58.816 INFO  DAGScheduler - ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154) finished in 0.013 s
14:51:58.817 INFO  DAGScheduler - Job 219 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.817 INFO  TaskSchedulerImpl - Killing all running tasks in stage 283: Stage finished
14:51:58.817 INFO  DAGScheduler - Job 219 finished: collect at ReadsSparkSourceUnitTest.java:154, took 0.013840 s
14:51:58.817 INFO  BlockManagerInfo - Removed broadcast_574_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.6 MiB)
14:51:58.817 INFO  BlockManagerInfo - Removed broadcast_583_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.7 MiB)
14:51:58.818 INFO  BlockManagerInfo - Removed broadcast_577_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.7 MiB)
14:51:58.818 INFO  BlockManagerInfo - Removed broadcast_578_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1919.7 MiB)
14:51:58.819 INFO  BlockManagerInfo - Removed broadcast_579_piece0 on localhost:44923 in memory (size: 103.6 KiB, free: 1919.9 MiB)
14:51:58.820 INFO  BlockManagerInfo - Removed broadcast_575_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.9 MiB)
14:51:58.820 INFO  BlockManagerInfo - Removed broadcast_576_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.9 MiB)
14:51:58.820 INFO  BlockManagerInfo - Removed broadcast_580_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.9 MiB)
14:51:58.828 INFO  FileInputFormat - Total input files to process : 2
14:51:58.865 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:155
14:51:58.865 INFO  DAGScheduler - Got job 220 (collect at ReadsSparkSourceUnitTest.java:155) with 2 output partitions
14:51:58.865 INFO  DAGScheduler - Final stage: ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155)
14:51:58.865 INFO  DAGScheduler - Parents of final stage: List()
14:51:58.865 INFO  DAGScheduler - Missing parents: List()
14:51:58.865 INFO  DAGScheduler - Submitting ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96), which has no missing parents
14:51:58.882 INFO  MemoryStore - Block broadcast_586 stored as values in memory (estimated size 426.2 KiB, free 1919.2 MiB)
14:51:58.883 INFO  MemoryStore - Block broadcast_586_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1919.1 MiB)
14:51:58.883 INFO  BlockManagerInfo - Added broadcast_586_piece0 in memory on localhost:44923 (size: 153.7 KiB, free: 1919.8 MiB)
14:51:58.884 INFO  SparkContext - Created broadcast 586 from broadcast at DAGScheduler.scala:1580
14:51:58.884 INFO  DAGScheduler - Submitting 2 missing tasks from ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0, 1))
14:51:58.884 INFO  TaskSchedulerImpl - Adding task set 284.0 with 2 tasks resource profile 0
14:51:58.884 INFO  TaskSetManager - Starting task 0.0 in stage 284.0 (TID 370) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7928 bytes) 
14:51:58.885 INFO  TaskSetManager - Starting task 1.0 in stage 284.0 (TID 371) (localhost, executor driver, partition 1, PROCESS_LOCAL, 7928 bytes) 
14:51:58.885 INFO  Executor - Running task 1.0 in stage 284.0 (TID 371)
14:51:58.885 INFO  Executor - Running task 0.0 in stage 284.0 (TID 370)
14:51:58.927 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00001.bam:0+131967
14:51:58.927 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00000.bam:0+131967
14:51:58.941 INFO  Executor - Finished task 0.0 in stage 284.0 (TID 370). 325590 bytes result sent to driver
14:51:58.941 INFO  Executor - Finished task 1.0 in stage 284.0 (TID 371). 325590 bytes result sent to driver
14:51:58.942 INFO  TaskSetManager - Finished task 0.0 in stage 284.0 (TID 370) in 58 ms on localhost (executor driver) (1/2)
14:51:58.942 INFO  TaskSetManager - Finished task 1.0 in stage 284.0 (TID 371) in 58 ms on localhost (executor driver) (2/2)
14:51:58.942 INFO  TaskSchedulerImpl - Removed TaskSet 284.0, whose tasks have all completed, from pool 
14:51:58.942 INFO  DAGScheduler - ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155) finished in 0.077 s
14:51:58.942 INFO  DAGScheduler - Job 220 is finished. Cancelling potential speculative or zombie tasks for this job
14:51:58.942 INFO  TaskSchedulerImpl - Killing all running tasks in stage 284: Stage finished
14:51:58.942 INFO  DAGScheduler - Job 220 finished: collect at ReadsSparkSourceUnitTest.java:155, took 0.077413 s
14:51:58.953 INFO  MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
14:51:58.954 INFO  NameNode - Formatting using clusterid: testClusterID
14:51:58.954 INFO  FSEditLog - Edit logging is async:true
14:51:58.962 INFO  FSNamesystem - KeyProvider: null
14:51:58.962 INFO  FSNamesystem - fsLock is fair: true
14:51:58.962 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
14:51:58.963 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
14:51:58.963 INFO  FSNamesystem - supergroup             = supergroup
14:51:58.963 INFO  FSNamesystem - isPermissionEnabled    = true
14:51:58.963 INFO  FSNamesystem - isStoragePolicyEnabled = true
14:51:58.963 INFO  FSNamesystem - HA Enabled: false
14:51:58.963 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:51:58.963 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
14:51:58.963 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
14:51:58.963 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14:51:58.963 INFO  BlockManager - The block deletion will start around 2026 Mar 04 14:51:58
14:51:58.963 INFO  GSet - Computing capacity for map BlocksMap
14:51:58.963 INFO  GSet - VM type       = 64-bit
14:51:58.963 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
14:51:58.963 INFO  GSet - capacity      = 2^23 = 8388608 entries
14:51:58.966 INFO  BlockManager - Storage policy satisfier is disabled
14:51:58.966 INFO  BlockManager - dfs.block.access.token.enable = false
14:51:58.966 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
14:51:58.966 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
14:51:58.966 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
14:51:58.966 INFO  BlockManager - defaultReplication         = 1
14:51:58.966 INFO  BlockManager - maxReplication             = 512
14:51:58.966 INFO  BlockManager - minReplication             = 1
14:51:58.966 INFO  BlockManager - maxReplicationStreams      = 2
14:51:58.966 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
14:51:58.966 INFO  BlockManager - encryptDataTransfer        = false
14:51:58.966 INFO  BlockManager - maxNumBlocksToLog          = 1000
14:51:58.966 INFO  GSet - Computing capacity for map INodeMap
14:51:58.966 INFO  GSet - VM type       = 64-bit
14:51:58.966 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
14:51:58.966 INFO  GSet - capacity      = 2^22 = 4194304 entries
14:51:58.967 INFO  FSDirectory - ACLs enabled? true
14:51:58.967 INFO  FSDirectory - POSIX ACL inheritance enabled? true
14:51:58.967 INFO  FSDirectory - XAttrs enabled? true
14:51:58.967 INFO  NameNode - Caching file names occurring more than 10 times
14:51:58.967 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
14:51:58.967 INFO  SnapshotManager - SkipList is disabled
14:51:58.967 INFO  GSet - Computing capacity for map cachedBlocks
14:51:58.967 INFO  GSet - VM type       = 64-bit
14:51:58.967 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
14:51:58.968 INFO  GSet - capacity      = 2^20 = 1048576 entries
14:51:58.968 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
14:51:58.968 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
14:51:58.968 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
14:51:58.968 INFO  FSNamesystem - Retry cache on namenode is enabled
14:51:58.968 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14:51:58.968 INFO  GSet - Computing capacity for map NameNodeRetryCache
14:51:58.968 INFO  GSet - VM type       = 64-bit
14:51:58.968 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
14:51:58.968 INFO  GSet - capacity      = 2^17 = 131072 entries
14:51:58.969 INFO  FSImage - Allocated new BlockPoolId: BP-959324454-10.1.0.125-1772635918969
14:51:58.971 INFO  Storage - Storage directory /tmp/minicluster_storage3675117217179407932/name-0-1 has been successfully formatted.
14:51:58.972 INFO  Storage - Storage directory /tmp/minicluster_storage3675117217179407932/name-0-2 has been successfully formatted.
14:51:58.981 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage3675117217179407932/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
14:51:58.983 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage3675117217179407932/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
14:51:58.985 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage3675117217179407932/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 416 bytes saved in 0 seconds .
14:51:58.987 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage3675117217179407932/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 416 bytes saved in 0 seconds .
14:51:58.989 INFO  NNStorageRetentionManager - Going to retain 1 images with txid >= 0
14:51:58.992 INFO  FSNamesystem - Stopping services started for active state
14:51:58.992 INFO  FSNamesystem - Stopping services started for standby state
14:51:58.992 INFO  NameNode - createNameNode []
14:51:58.994 WARN  MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
14:51:58.995 INFO  MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
14:51:58.995 INFO  MetricsSystemImpl - NameNode metrics system started
14:51:58.996 INFO  NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
14:51:59.006 INFO  JvmPauseMonitor - Starting JVM pause monitor
14:51:59.007 INFO  DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
14:51:59.007 INFO  DFSUtil - Starting Web-server for hdfs at: http://localhost:0
14:51:59.008 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
14:51:59.009 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
14:51:59.010 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
14:51:59.010 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
14:51:59.010 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
14:51:59.011 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
14:51:59.011 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
14:51:59.012 INFO  HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
14:51:59.012 INFO  HttpServer2 - Jetty bound to port 36349
14:51:59.012 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
14:51:59.014 INFO  session - DefaultSessionIdManager workerName=node0
14:51:59.014 INFO  session - No SessionScavenger set, using defaults
14:51:59.014 INFO  session - node0 Scavenging every 660000ms
14:51:59.015 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
14:51:59.015 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@4e1ea9a4{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
14:51:59.106 INFO  ContextHandler - Started o.e.j.w.WebAppContext@24234746{hdfs,/,file:///tmp/jetty-localhost-36349-hadoop-hdfs-3_3_6-tests_jar-_-any-15555145096110027833/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
14:51:59.107 INFO  AbstractConnector - Started ServerConnector@65f60f81{HTTP/1.1, (http/1.1)}{localhost:36349}
14:51:59.107 INFO  Server - Started @78474ms
14:51:59.108 INFO  FSEditLog - Edit logging is async:true
14:51:59.117 INFO  FSNamesystem - KeyProvider: null
14:51:59.117 INFO  FSNamesystem - fsLock is fair: true
14:51:59.117 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
14:51:59.117 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
14:51:59.117 INFO  FSNamesystem - supergroup             = supergroup
14:51:59.117 INFO  FSNamesystem - isPermissionEnabled    = true
14:51:59.117 INFO  FSNamesystem - isStoragePolicyEnabled = true
14:51:59.117 INFO  FSNamesystem - HA Enabled: false
14:51:59.117 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:51:59.117 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
14:51:59.117 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
14:51:59.118 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14:51:59.118 INFO  BlockManager - The block deletion will start around 2026 Mar 04 14:51:59
14:51:59.118 INFO  GSet - Computing capacity for map BlocksMap
14:51:59.118 INFO  GSet - VM type       = 64-bit
14:51:59.118 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
14:51:59.118 INFO  GSet - capacity      = 2^23 = 8388608 entries
14:51:59.123 INFO  BlockManagerInfo - Removed broadcast_586_piece0 on localhost:44923 in memory (size: 153.7 KiB, free: 1919.9 MiB)
14:51:59.123 INFO  BlockManagerInfo - Removed broadcast_584_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1920.0 MiB)
14:51:59.124 INFO  BlockManager - Storage policy satisfier is disabled
14:51:59.124 INFO  BlockManager - dfs.block.access.token.enable = false
14:51:59.124 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
14:51:59.124 INFO  BlockManagerInfo - Removed broadcast_585_piece0 on localhost:44923 in memory (size: 1778.0 B, free: 1920.0 MiB)
14:51:59.124 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
14:51:59.124 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
14:51:59.124 INFO  BlockManager - defaultReplication         = 1
14:51:59.124 INFO  BlockManager - maxReplication             = 512
14:51:59.124 INFO  BlockManager - minReplication             = 1
14:51:59.124 INFO  BlockManager - maxReplicationStreams      = 2
14:51:59.124 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
14:51:59.124 INFO  BlockManager - encryptDataTransfer        = false
14:51:59.124 INFO  BlockManager - maxNumBlocksToLog          = 1000
14:51:59.125 INFO  GSet - Computing capacity for map INodeMap
14:51:59.125 INFO  GSet - VM type       = 64-bit
14:51:59.125 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
14:51:59.125 INFO  GSet - capacity      = 2^22 = 4194304 entries
14:51:59.133 INFO  FSDirectory - ACLs enabled? true
14:51:59.133 INFO  FSDirectory - POSIX ACL inheritance enabled? true
14:51:59.133 INFO  FSDirectory - XAttrs enabled? true
14:51:59.133 INFO  NameNode - Caching file names occurring more than 10 times
14:51:59.133 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
14:51:59.133 INFO  SnapshotManager - SkipList is disabled
14:51:59.133 INFO  GSet - Computing capacity for map cachedBlocks
14:51:59.133 INFO  GSet - VM type       = 64-bit
14:51:59.133 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
14:51:59.133 INFO  GSet - capacity      = 2^20 = 1048576 entries
14:51:59.133 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
14:51:59.133 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
14:51:59.133 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
14:51:59.133 INFO  FSNamesystem - Retry cache on namenode is enabled
14:51:59.133 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14:51:59.133 INFO  GSet - Computing capacity for map NameNodeRetryCache
14:51:59.133 INFO  GSet - VM type       = 64-bit
14:51:59.134 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
14:51:59.134 INFO  GSet - capacity      = 2^17 = 131072 entries
14:51:59.135 INFO  Storage - Lock on /tmp/minicluster_storage3675117217179407932/name-0-1/in_use.lock acquired by nodename 3348@runnervmnay03
14:51:59.136 INFO  Storage - Lock on /tmp/minicluster_storage3675117217179407932/name-0-2/in_use.lock acquired by nodename 3348@runnervmnay03
14:51:59.139 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage3675117217179407932/name-0-1/current
14:51:59.139 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage3675117217179407932/name-0-2/current
14:51:59.139 INFO  FSImage - No edit log streams selected.
14:51:59.139 INFO  FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage3675117217179407932/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
14:51:59.140 INFO  FSImageFormatPBINode - Loading 1 INodes.
14:51:59.143 INFO  FSImageFormatPBINode - Successfully loaded 1 inodes
14:51:59.143 INFO  FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
14:51:59.143 INFO  FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
14:51:59.143 INFO  FSImage - Loaded image for txid 0 from /tmp/minicluster_storage3675117217179407932/name-0-1/current/fsimage_0000000000000000000
14:51:59.143 INFO  FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
14:51:59.146 INFO  FSEditLog - Starting log segment at 1
14:51:59.152 INFO  NameCache - initialized with 0 entries 0 lookups
14:51:59.152 INFO  FSNamesystem - Finished loading FSImage in 18 msecs
14:51:59.152 INFO  NameNode - RPC server is binding to localhost:0
14:51:59.152 INFO  NameNode - Enable NameNode state context:false
14:51:59.152 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
14:51:59.153 INFO  Server - Listener at localhost:43391
14:51:59.153 INFO  Server - Starting Socket Reader #1 for port 0
14:51:59.157 INFO  NameNode - Clients are to use localhost:43391 to access this namenode/service.
14:51:59.157 INFO  FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
14:51:59.174 INFO  LeaseManager - Number of blocks under construction: 0
14:51:59.174 INFO  DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
14:51:59.175 INFO  BlockManager - initializing replication queues
14:51:59.175 INFO  StateChange - STATE* Leaving safe mode after 0 secs
14:51:59.175 INFO  StateChange - STATE* Network topology has 0 racks and 0 datanodes
14:51:59.175 INFO  StateChange - STATE* UnderReplicatedBlocks has 0 blocks
14:51:59.176 INFO  BlockManager - Start MarkedDeleteBlockScrubber thread
14:51:59.179 INFO  BlockManager - Total number of blocks            = 0
14:51:59.179 INFO  BlockManager - Number of invalid blocks          = 0
14:51:59.179 INFO  BlockManager - Number of under-replicated blocks = 0
14:51:59.179 INFO  BlockManager - Number of  over-replicated blocks = 0
14:51:59.179 INFO  BlockManager - Number of blocks being written    = 0
14:51:59.179 INFO  StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 5 msec
14:51:59.181 INFO  Server - IPC Server Responder: starting
14:51:59.181 INFO  Server - IPC Server listener on 0: starting
14:51:59.183 INFO  NameNode - NameNode RPC up at: localhost/127.0.0.1:43391
14:51:59.183 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
14:51:59.183 INFO  FSNamesystem - Starting services required for active state
14:51:59.183 INFO  FSDirectory - Initializing quota with 12 thread(s)
14:51:59.183 INFO  FSDirectory - Quota initialization completed in 0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
14:51:59.184 INFO  CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
14:51:59.185 INFO  MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data1,[DISK]file:/tmp/minicluster_storage3675117217179407932/data/data2
14:51:59.186 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data1
14:51:59.186 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data2
14:51:59.198 INFO  MetricsSystemImpl - DataNode metrics system started (again)
14:51:59.198 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:51:59.198 INFO  BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
14:51:59.198 INFO  DataNode - Configured hostname is 127.0.0.1
14:51:59.198 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:51:59.198 INFO  DataNode - Starting DataNode with maxLockedMemory = 0
14:51:59.199 INFO  DataNode - Opened streaming server at /127.0.0.1:36461
14:51:59.199 INFO  DataNode - Balancing bandwidth is 104857600 bytes/s
14:51:59.199 INFO  DataNode - Number threads for balancing is 100
14:51:59.202 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
14:51:59.202 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
14:51:59.204 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
14:51:59.204 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
14:51:59.204 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
14:51:59.205 INFO  HttpServer2 - Jetty bound to port 38009
14:51:59.205 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
14:51:59.206 INFO  session - DefaultSessionIdManager workerName=node0
14:51:59.206 INFO  session - No SessionScavenger set, using defaults
14:51:59.206 INFO  session - node0 Scavenging every 660000ms
14:51:59.207 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@34ff1cbb{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
14:51:59.341 INFO  ContextHandler - Started o.e.j.w.WebAppContext@525da4e4{datanode,/,file:///tmp/jetty-localhost-38009-hadoop-hdfs-3_3_6-tests_jar-_-any-10457524842723700011/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
14:51:59.342 INFO  AbstractConnector - Started ServerConnector@2465546{HTTP/1.1, (http/1.1)}{localhost:38009}
14:51:59.342 INFO  Server - Started @78709ms
14:51:59.344 WARN  DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
14:51:59.345 INFO  DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:33619
14:51:59.345 INFO  JvmPauseMonitor - Starting JVM pause monitor
14:51:59.345 INFO  DataNode - dnUserName = runner
14:51:59.346 INFO  DataNode - supergroup = supergroup
14:51:59.346 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
14:51:59.346 INFO  Server - Listener at localhost:39271
14:51:59.347 INFO  Server - Starting Socket Reader #1 for port 0
14:51:59.349 INFO  DataNode - Opened IPC server at /127.0.0.1:39271
14:51:59.365 INFO  DataNode - Refresh request received for nameservices: null
14:51:59.365 INFO  DataNode - Starting BPOfferServices for nameservices: <default>
14:51:59.367 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
14:51:59.368 INFO  DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:43391 starting to offer service
14:51:59.369 INFO  Server - IPC Server Responder: starting
14:51:59.369 INFO  Server - IPC Server listener on 0: starting
14:51:59.372 INFO  DataNode - Acknowledging ACTIVE Namenode during handshake: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:43391
14:51:59.372 INFO  Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
14:51:59.372 INFO  Storage - Lock on /tmp/minicluster_storage3675117217179407932/data/data1/in_use.lock acquired by nodename 3348@runnervmnay03
14:51:59.373 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data1 is not formatted for namespace 92716075. Formatting...
14:51:59.373 INFO  Storage - Generated new storageID DS-fb6cd716-c438-4bd9-b705-db1128168dc6 for directory /tmp/minicluster_storage3675117217179407932/data/data1 
14:51:59.376 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
14:51:59.376 INFO  MiniDFSCluster - dnInfo.length != numDataNodes
14:51:59.376 INFO  MiniDFSCluster - Waiting for cluster to become active
14:51:59.377 INFO  Storage - Lock on /tmp/minicluster_storage3675117217179407932/data/data2/in_use.lock acquired by nodename 3348@runnervmnay03
14:51:59.377 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data2 is not formatted for namespace 92716075. Formatting...
14:51:59.377 INFO  Storage - Generated new storageID DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb for directory /tmp/minicluster_storage3675117217179407932/data/data2 
14:51:59.394 INFO  Storage - Analyzing storage directories for bpid BP-959324454-10.1.0.125-1772635918969
14:51:59.394 INFO  Storage - Locking is disabled for /tmp/minicluster_storage3675117217179407932/data/data1/current/BP-959324454-10.1.0.125-1772635918969
14:51:59.394 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data1 and block pool id BP-959324454-10.1.0.125-1772635918969 is not formatted. Formatting ...
14:51:59.394 INFO  Storage - Formatting block pool BP-959324454-10.1.0.125-1772635918969 directory /tmp/minicluster_storage3675117217179407932/data/data1/current/BP-959324454-10.1.0.125-1772635918969/current
14:51:59.409 INFO  Storage - Analyzing storage directories for bpid BP-959324454-10.1.0.125-1772635918969
14:51:59.409 INFO  Storage - Locking is disabled for /tmp/minicluster_storage3675117217179407932/data/data2/current/BP-959324454-10.1.0.125-1772635918969
14:51:59.409 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data2 and block pool id BP-959324454-10.1.0.125-1772635918969 is not formatted. Formatting ...
14:51:59.409 INFO  Storage - Formatting block pool BP-959324454-10.1.0.125-1772635918969 directory /tmp/minicluster_storage3675117217179407932/data/data2/current/BP-959324454-10.1.0.125-1772635918969/current
14:51:59.411 INFO  DataNode - Setting up storage: nsid=92716075;bpid=BP-959324454-10.1.0.125-1772635918969;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=92716075;c=1772635918969;bpid=BP-959324454-10.1.0.125-1772635918969;dnuuid=null
14:51:59.411 INFO  DataNode - Generated and persisted new Datanode UUID 5200d945-fd7a-443e-ac45-1209777b0dbd
14:51:59.412 INFO  FsDatasetImpl - The datanode lock is a read write lock
14:51:59.413 INFO  FsDatasetImpl - Added new volume: DS-fb6cd716-c438-4bd9-b705-db1128168dc6
14:51:59.413 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data1, StorageType: DISK
14:51:59.414 INFO  FsDatasetImpl - Added new volume: DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb
14:51:59.414 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage3675117217179407932/data/data2, StorageType: DISK
14:51:59.414 INFO  MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
14:51:59.414 INFO  FsDatasetImpl - Registered FSDatasetState MBean
14:51:59.414 INFO  FsDatasetImpl - Adding block pool BP-959324454-10.1.0.125-1772635918969
14:51:59.415 INFO  FsDatasetImpl - Scanning block pool BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data1...
14:51:59.415 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage3675117217179407932/data/data1/current/BP-959324454-10.1.0.125-1772635918969/current, will proceed with Du for space computation calculation, 
14:51:59.415 INFO  FsDatasetImpl - Scanning block pool BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data2...
14:51:59.416 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage3675117217179407932/data/data2/current/BP-959324454-10.1.0.125-1772635918969/current, will proceed with Du for space computation calculation, 
14:51:59.430 INFO  FsDatasetImpl - Time taken to scan block pool BP-959324454-10.1.0.125-1772635918969 on /tmp/minicluster_storage3675117217179407932/data/data2: 15ms
14:51:59.433 INFO  FsDatasetImpl - Time taken to scan block pool BP-959324454-10.1.0.125-1772635918969 on /tmp/minicluster_storage3675117217179407932/data/data1: 18ms
14:51:59.433 INFO  FsDatasetImpl - Total time to scan all replicas for block pool BP-959324454-10.1.0.125-1772635918969: 19ms
14:51:59.434 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data1...
14:51:59.434 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage3675117217179407932/data/data1/current/BP-959324454-10.1.0.125-1772635918969/current/replicas doesn't exist 
14:51:59.434 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data2...
14:51:59.434 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage3675117217179407932/data/data2/current/BP-959324454-10.1.0.125-1772635918969/current/replicas doesn't exist 
14:51:59.434 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data1: 0ms
14:51:59.434 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data2: 0ms
14:51:59.434 INFO  FsDatasetImpl - Total time to add all replicas to map for block pool BP-959324454-10.1.0.125-1772635918969: 0ms
14:51:59.434 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage3675117217179407932/data/data1
14:51:59.435 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage3675117217179407932/data/data1
14:51:59.435 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage3675117217179407932/data/data2
14:51:59.435 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage3675117217179407932/data/data2
14:51:59.435 INFO  VolumeScanner - Now scanning bpid BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data2
14:51:59.436 WARN  DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
14:51:59.436 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3675117217179407932/data/data2, DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb): finished scanning block pool BP-959324454-10.1.0.125-1772635918969
14:51:59.436 INFO  DirectoryScanner - Periodic Directory Tree Verification scan starting in 7976660ms with interval of 21600000ms and throttle limit of -1ms/s
14:51:59.436 INFO  DataNode - Block pool BP-959324454-10.1.0.125-1772635918969 (Datanode Uuid 5200d945-fd7a-443e-ac45-1209777b0dbd) service to localhost/127.0.0.1:43391 beginning handshake with NN
14:51:59.436 INFO  VolumeScanner - Now scanning bpid BP-959324454-10.1.0.125-1772635918969 on volume /tmp/minicluster_storage3675117217179407932/data/data1
14:51:59.436 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3675117217179407932/data/data2, DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb): no suitable block pools found to scan.  Waiting 1814399999 ms.
14:51:59.436 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3675117217179407932/data/data1, DS-fb6cd716-c438-4bd9-b705-db1128168dc6): finished scanning block pool BP-959324454-10.1.0.125-1772635918969
14:51:59.436 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3675117217179407932/data/data1, DS-fb6cd716-c438-4bd9-b705-db1128168dc6): no suitable block pools found to scan.  Waiting 1814399999 ms.
14:51:59.437 INFO  StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:36461, datanodeUuid=5200d945-fd7a-443e-ac45-1209777b0dbd, infoPort=33619, infoSecurePort=0, ipcPort=39271, storageInfo=lv=-57;cid=testClusterID;nsid=92716075;c=1772635918969) storage 5200d945-fd7a-443e-ac45-1209777b0dbd
14:51:59.437 INFO  NetworkTopology - Adding a new node: /default-rack/127.0.0.1:36461
14:51:59.437 INFO  BlockReportLeaseManager - Registered DN 5200d945-fd7a-443e-ac45-1209777b0dbd (127.0.0.1:36461).
14:51:59.437 INFO  DataNode - Block pool BP-959324454-10.1.0.125-1772635918969 (Datanode Uuid 5200d945-fd7a-443e-ac45-1209777b0dbd) service to localhost/127.0.0.1:43391 successfully registered with NN
14:51:59.437 INFO  DataNode - For namenode localhost/127.0.0.1:43391 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
14:51:59.437 INFO  DataNode - Starting IBR Task Handler.
14:51:59.438 INFO  DatanodeDescriptor - Adding new storage ID DS-fb6cd716-c438-4bd9-b705-db1128168dc6 for DN 127.0.0.1:36461
14:51:59.438 INFO  DatanodeDescriptor - Adding new storage ID DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb for DN 127.0.0.1:36461
14:51:59.438 INFO  DataNode - After receiving heartbeat response, updating state of namenode localhost:43391 to active
14:51:59.439 INFO  BlockStateChange - BLOCK* processReport 0x64f48c7bb1a5f267 with lease ID 0x9929cdc30e8a9d7f: Processing first storage report for DS-fb6cd716-c438-4bd9-b705-db1128168dc6 from datanode DatanodeRegistration(127.0.0.1:36461, datanodeUuid=5200d945-fd7a-443e-ac45-1209777b0dbd, infoPort=33619, infoSecurePort=0, ipcPort=39271, storageInfo=lv=-57;cid=testClusterID;nsid=92716075;c=1772635918969)
14:51:59.439 INFO  BlockStateChange - BLOCK* processReport 0x64f48c7bb1a5f267 with lease ID 0x9929cdc30e8a9d7f: from storage DS-fb6cd716-c438-4bd9-b705-db1128168dc6 node DatanodeRegistration(127.0.0.1:36461, datanodeUuid=5200d945-fd7a-443e-ac45-1209777b0dbd, infoPort=33619, infoSecurePort=0, ipcPort=39271, storageInfo=lv=-57;cid=testClusterID;nsid=92716075;c=1772635918969), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
14:51:59.439 INFO  BlockStateChange - BLOCK* processReport 0x64f48c7bb1a5f267 with lease ID 0x9929cdc30e8a9d7f: Processing first storage report for DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb from datanode DatanodeRegistration(127.0.0.1:36461, datanodeUuid=5200d945-fd7a-443e-ac45-1209777b0dbd, infoPort=33619, infoSecurePort=0, ipcPort=39271, storageInfo=lv=-57;cid=testClusterID;nsid=92716075;c=1772635918969)
14:51:59.439 INFO  BlockStateChange - BLOCK* processReport 0x64f48c7bb1a5f267 with lease ID 0x9929cdc30e8a9d7f: from storage DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb node DatanodeRegistration(127.0.0.1:36461, datanodeUuid=5200d945-fd7a-443e-ac45-1209777b0dbd, infoPort=33619, infoSecurePort=0, ipcPort=39271, storageInfo=lv=-57;cid=testClusterID;nsid=92716075;c=1772635918969), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
14:51:59.439 INFO  DataNode - Successfully sent block report 0x64f48c7bb1a5f267 with lease ID 0x9929cdc30e8a9d7f to namenode: localhost/127.0.0.1:43391,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
14:51:59.439 INFO  DataNode - Got finalize command for block pool BP-959324454-10.1.0.125-1772635918969
14:51:59.477 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
14:51:59.477 INFO  MiniDFSCluster - Cluster is active
14:51:59.480 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:51:59.482 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.cram	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
14:51:59.483 INFO  StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:36461 for /user/runner/hdfs.cram
14:51:59.485 INFO  DataNode - Receiving BP-959324454-10.1.0.125-1772635918969:blk_1073741825_1001 src: /127.0.0.1:35926 dest: /127.0.0.1:36461
14:51:59.486 INFO  clienttrace - src: /127.0.0.1:35926, dest: /127.0.0.1:36461, bytes: 50619, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_62088566_1, offset: 0, srvID: 5200d945-fd7a-443e-ac45-1209777b0dbd, blockid: BP-959324454-10.1.0.125-1772635918969:blk_1073741825_1001, duration(ns): 620226
14:51:59.487 INFO  DataNode - PacketResponder: BP-959324454-10.1.0.125-1772635918969:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
14:51:59.487 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.cram is closed by DFSClient_NONMAPREDUCE_62088566_1
14:51:59.488 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:51:59.489 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.fasta	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
14:51:59.490 INFO  StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:36461 for /user/runner/hdfs.fasta
14:51:59.491 INFO  DataNode - Receiving BP-959324454-10.1.0.125-1772635918969:blk_1073741826_1002 src: /127.0.0.1:35938 dest: /127.0.0.1:36461
14:51:59.495 INFO  clienttrace - src: /127.0.0.1:35938, dest: /127.0.0.1:36461, bytes: 1016671, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_62088566_1, offset: 0, srvID: 5200d945-fd7a-443e-ac45-1209777b0dbd, blockid: BP-959324454-10.1.0.125-1772635918969:blk_1073741826_1002, duration(ns): 2560755
14:51:59.495 INFO  DataNode - PacketResponder: BP-959324454-10.1.0.125-1772635918969:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
14:51:59.496 INFO  FSNamesystem - BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.fasta
14:51:59.897 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.fasta is closed by DFSClient_NONMAPREDUCE_62088566_1
14:51:59.898 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:51:59.899 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.fasta.fai	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
14:51:59.900 INFO  StateChange - BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:36461 for /user/runner/hdfs.fasta.fai
14:51:59.900 INFO  DataNode - Receiving BP-959324454-10.1.0.125-1772635918969:blk_1073741827_1003 src: /127.0.0.1:35948 dest: /127.0.0.1:36461
14:51:59.902 INFO  clienttrace - src: /127.0.0.1:35948, dest: /127.0.0.1:36461, bytes: 19, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_62088566_1, offset: 0, srvID: 5200d945-fd7a-443e-ac45-1209777b0dbd, blockid: BP-959324454-10.1.0.125-1772635918969:blk_1073741827_1003, duration(ns): 572949
14:51:59.902 INFO  DataNode - PacketResponder: BP-959324454-10.1.0.125-1772635918969:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
14:51:59.902 INFO  FSNamesystem - BLOCK* blk_1073741827_1003 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.fasta.fai
14:52:00.303 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.fasta.fai is closed by DFSClient_NONMAPREDUCE_62088566_1
14:52:00.304 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.304 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.305 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.305 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.306 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.307 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
14:52:00.307 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
14:52:00.307 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.308 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.308 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:52:00.309 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:52:00.310 WARN  DFSUtil - Unexpected value for data transfer bytes=23 duration=0
14:52:00.312 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
14:52:00.313 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.313 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.314 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
14:52:00.314 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
14:52:00.314 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.315 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.316 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
14:52:00.317 INFO  MemoryStore - Block broadcast_587 stored as values in memory (estimated size 472.0 B, free 1920.0 MiB)
14:52:00.317 INFO  MemoryStore - Block broadcast_587_piece0 stored as bytes in memory (estimated size 129.0 B, free 1920.0 MiB)
14:52:00.317 INFO  BlockManagerInfo - Added broadcast_587_piece0 in memory on localhost:44923 (size: 129.0 B, free: 1920.0 MiB)
14:52:00.318 INFO  SparkContext - Created broadcast 587 from broadcast at CramSource.java:114
14:52:00.319 INFO  MemoryStore - Block broadcast_588 stored as values in memory (estimated size 297.9 KiB, free 1919.7 MiB)
14:52:00.326 INFO  MemoryStore - Block broadcast_588_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
14:52:00.326 INFO  BlockManagerInfo - Added broadcast_588_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1920.0 MiB)
14:52:00.326 INFO  SparkContext - Created broadcast 588 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.342 INFO  MemoryStore - Block broadcast_589 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
14:52:00.343 INFO  MemoryStore - Block broadcast_589_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
14:52:00.343 INFO  BlockManagerInfo - Added broadcast_589_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1920.0 MiB)
14:52:00.343 INFO  SparkContext - Created broadcast 589 from broadcast at CramSource.java:114
14:52:00.344 INFO  MemoryStore - Block broadcast_590 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
14:52:00.350 INFO  MemoryStore - Block broadcast_590_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
14:52:00.350 INFO  BlockManagerInfo - Added broadcast_590_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.9 MiB)
14:52:00.351 INFO  SparkContext - Created broadcast 590 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.366 INFO  FileInputFormat - Total input files to process : 1
14:52:00.393 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:244
14:52:00.393 INFO  DAGScheduler - Got job 221 (collect at ReadsSparkSourceUnitTest.java:244) with 1 output partitions
14:52:00.393 INFO  DAGScheduler - Final stage: ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244)
14:52:00.393 INFO  DAGScheduler - Parents of final stage: List()
14:52:00.393 INFO  DAGScheduler - Missing parents: List()
14:52:00.393 INFO  DAGScheduler - Submitting ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:00.405 INFO  MemoryStore - Block broadcast_591 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
14:52:00.406 INFO  MemoryStore - Block broadcast_591_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
14:52:00.406 INFO  BlockManagerInfo - Added broadcast_591_piece0 in memory on localhost:44923 (size: 103.6 KiB, free: 1919.8 MiB)
14:52:00.407 INFO  SparkContext - Created broadcast 591 from broadcast at DAGScheduler.scala:1580
14:52:00.407 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:00.407 INFO  TaskSchedulerImpl - Adding task set 285.0 with 1 tasks resource profile 0
14:52:00.407 INFO  TaskSetManager - Starting task 0.0 in stage 285.0 (TID 372) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes) 
14:52:00.408 INFO  Executor - Running task 0.0 in stage 285.0 (TID 372)
14:52:00.431 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
14:52:00.440 INFO  Executor - Finished task 0.0 in stage 285.0 (TID 372). 154101 bytes result sent to driver
14:52:00.442 INFO  TaskSetManager - Finished task 0.0 in stage 285.0 (TID 372) in 35 ms on localhost (executor driver) (1/1)
14:52:00.442 INFO  TaskSchedulerImpl - Removed TaskSet 285.0, whose tasks have all completed, from pool 
14:52:00.442 INFO  DAGScheduler - ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244) finished in 0.048 s
14:52:00.442 INFO  DAGScheduler - Job 221 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:00.442 INFO  TaskSchedulerImpl - Killing all running tasks in stage 285: Stage finished
14:52:00.442 INFO  DAGScheduler - Job 221 finished: collect at ReadsSparkSourceUnitTest.java:244, took 0.049249 s
14:52:00.443 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.443 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.444 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.444 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.445 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.446 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
14:52:00.446 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
14:52:00.446 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.447 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.447 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:52:00.448 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:52:00.449 WARN  DFSUtil - Unexpected value for data transfer bytes=23 duration=0
14:52:00.451 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.451 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.452 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
14:52:00.452 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
14:52:00.453 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.453 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.454 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
14:52:00.454 INFO  MemoryStore - Block broadcast_592 stored as values in memory (estimated size 472.0 B, free 1918.9 MiB)
14:52:00.455 INFO  MemoryStore - Block broadcast_592_piece0 stored as bytes in memory (estimated size 129.0 B, free 1918.9 MiB)
14:52:00.455 INFO  BlockManagerInfo - Added broadcast_592_piece0 in memory on localhost:44923 (size: 129.0 B, free: 1919.8 MiB)
14:52:00.455 INFO  SparkContext - Created broadcast 592 from broadcast at CramSource.java:114
14:52:00.456 INFO  MemoryStore - Block broadcast_593 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
14:52:00.462 INFO  MemoryStore - Block broadcast_593_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
14:52:00.463 INFO  BlockManagerInfo - Added broadcast_593_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.8 MiB)
14:52:00.463 INFO  SparkContext - Created broadcast 593 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.478 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.478 INFO  FileInputFormat - Total input files to process : 1
14:52:00.479 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.505 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:245
14:52:00.505 INFO  DAGScheduler - Got job 222 (collect at ReadsSparkSourceUnitTest.java:245) with 1 output partitions
14:52:00.505 INFO  DAGScheduler - Final stage: ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245)
14:52:00.505 INFO  DAGScheduler - Parents of final stage: List()
14:52:00.505 INFO  DAGScheduler - Missing parents: List()
14:52:00.505 INFO  DAGScheduler - Submitting ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:00.517 INFO  MemoryStore - Block broadcast_594 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
14:52:00.518 INFO  MemoryStore - Block broadcast_594_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
14:52:00.518 INFO  BlockManagerInfo - Added broadcast_594_piece0 in memory on localhost:44923 (size: 103.6 KiB, free: 1919.7 MiB)
14:52:00.518 INFO  SparkContext - Created broadcast 594 from broadcast at DAGScheduler.scala:1580
14:52:00.518 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:00.518 INFO  TaskSchedulerImpl - Adding task set 286.0 with 1 tasks resource profile 0
14:52:00.519 INFO  TaskSetManager - Starting task 0.0 in stage 286.0 (TID 373) (localhost, executor driver, partition 0, ANY, 7797 bytes) 
14:52:00.519 INFO  Executor - Running task 0.0 in stage 286.0 (TID 373)
14:52:00.542 INFO  NewHadoopRDD - Input split: hdfs://localhost:43391/user/runner/hdfs.cram:0+50619
14:52:00.542 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.543 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.cram	dst=null	perm=null	proto=rpc
14:52:00.544 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.cram.crai	dst=null	perm=null	proto=rpc
14:52:00.544 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.crai	dst=null	perm=null	proto=rpc
14:52:00.544 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.545 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta	dst=null	perm=null	proto=rpc
14:52:00.545 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:52:00.546 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.fasta.fai	dst=null	perm=null	proto=rpc
14:52:00.548 WARN  DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
14:52:00.558 WARN  DFSUtil - Unexpected value for data transfer bytes=99943 duration=0
14:52:00.561 INFO  Executor - Finished task 0.0 in stage 286.0 (TID 373). 154058 bytes result sent to driver
14:52:00.561 INFO  TaskSetManager - Finished task 0.0 in stage 286.0 (TID 373) in 42 ms on localhost (executor driver) (1/1)
14:52:00.562 INFO  TaskSchedulerImpl - Removed TaskSet 286.0, whose tasks have all completed, from pool 
14:52:00.562 INFO  DAGScheduler - ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245) finished in 0.056 s
14:52:00.562 INFO  DAGScheduler - Job 222 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:00.562 INFO  TaskSchedulerImpl - Killing all running tasks in stage 286: Stage finished
14:52:00.562 INFO  DAGScheduler - Job 222 finished: collect at ReadsSparkSourceUnitTest.java:245, took 0.057052 s
14:52:00.565 INFO  MiniDFSCluster - Shutting down the Mini HDFS Cluster
14:52:00.565 INFO  MiniDFSCluster - Shutting down DataNode 0
14:52:00.565 INFO  DirectoryScanner - Shutdown has been called
14:52:00.565 INFO  DataNode - Closing all peers.
14:52:00.566 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3675117217179407932/data/data1, DS-fb6cd716-c438-4bd9-b705-db1128168dc6) exiting.
14:52:00.566 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage3675117217179407932/data/data2, DS-94dd31fe-54dc-4e9a-a8e1-00a2d96660cb) exiting.
14:52:00.569 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@525da4e4{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
14:52:00.569 INFO  AbstractConnector - Stopped ServerConnector@2465546{HTTP/1.1, (http/1.1)}{localhost:0}
14:52:00.570 INFO  session - node0 Stopped scavenging
14:52:00.570 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@34ff1cbb{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
14:52:00.571 INFO  DataNode - Waiting up to 30 seconds for transfer threads to complete
14:52:00.571 INFO  Server - Stopping server on 39271
14:52:00.572 INFO  Server - Stopping IPC Server listener on 0
14:52:00.572 INFO  Server - Stopping IPC Server Responder
14:52:00.572 ERROR DataNode - Command processor encountered interrupt and exit.
14:52:00.572 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
14:52:00.572 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
14:52:00.572 WARN  DataNode - Ending block pool service for: Block pool BP-959324454-10.1.0.125-1772635918969 (Datanode Uuid 5200d945-fd7a-443e-ac45-1209777b0dbd) service to localhost/127.0.0.1:43391
14:52:00.572 INFO  DataNode - Removed Block pool BP-959324454-10.1.0.125-1772635918969 (Datanode Uuid 5200d945-fd7a-443e-ac45-1209777b0dbd)
14:52:00.572 INFO  FsDatasetImpl - Removing block pool BP-959324454-10.1.0.125-1772635918969
14:52:00.572 WARN  DataNode - Ending command processor service for: Thread[Command processor,5,main]
14:52:00.573 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
14:52:00.573 INFO  FsDatasetAsyncDiskService - Shutting down all async disk service threads
14:52:00.573 INFO  FsDatasetAsyncDiskService - All async disk service threads have been shut down
14:52:00.573 INFO  RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
14:52:00.573 INFO  RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
14:52:00.573 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
14:52:00.573 INFO  DataNode - Shutdown complete.
14:52:00.573 INFO  MiniDFSCluster - Shutting down the namenode
14:52:00.573 INFO  FSNamesystem - Stopping services started for active state
14:52:00.574 INFO  FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
14:52:00.574 INFO  FSEditLog - Ending log segment 1, 18
14:52:00.574 INFO  FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
14:52:00.574 INFO  FSEditLog - Number of transactions: 19 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 5 Number of syncs: 15 SyncTimes(ms): 3 1 
14:52:00.574 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage3675117217179407932/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage3675117217179407932/name-0-1/current/edits_0000000000000000001-0000000000000000019
14:52:00.575 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage3675117217179407932/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage3675117217179407932/name-0-2/current/edits_0000000000000000001-0000000000000000019
14:52:00.575 INFO  FSEditLog - FSEditLogAsync was interrupted, exiting
14:52:00.575 INFO  CacheReplicationMonitor - Shutting down CacheReplicationMonitor
14:52:00.575 INFO  Server - Stopping server on 43391
14:52:00.576 INFO  Server - Stopping IPC Server listener on 0
14:52:00.576 INFO  Server - Stopping IPC Server Responder
14:52:00.577 INFO  BlockManager - Stopping MarkedDeleteBlockScrubber.
14:52:00.577 INFO  BlockManager - Stopping RedundancyMonitor.
14:52:00.582 INFO  FSNamesystem - Stopping services started for active state
14:52:00.582 INFO  FSNamesystem - Stopping services started for standby state
14:52:00.583 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@24234746{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
14:52:00.584 INFO  AbstractConnector - Stopped ServerConnector@65f60f81{HTTP/1.1, (http/1.1)}{localhost:0}
14:52:00.584 INFO  session - node0 Stopped scavenging
14:52:00.584 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@4e1ea9a4{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
14:52:00.585 INFO  MetricsSystemImpl - Stopping DataNode metrics system...
14:52:00.585 INFO  MetricsSystemImpl - DataNode metrics system stopped.
14:52:00.585 INFO  MetricsSystemImpl - DataNode metrics system shutdown complete.
14:52:00.595 INFO  MemoryStore - Block broadcast_595 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
14:52:00.605 INFO  MemoryStore - Block broadcast_595_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
14:52:00.605 INFO  BlockManagerInfo - Added broadcast_595_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.6 MiB)
14:52:00.605 INFO  SparkContext - Created broadcast 595 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.635 INFO  MemoryStore - Block broadcast_596 stored as values in memory (estimated size 298.0 KiB, free 1917.6 MiB)
14:52:00.641 INFO  MemoryStore - Block broadcast_596_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.5 MiB)
14:52:00.641 INFO  BlockManagerInfo - Added broadcast_596_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.6 MiB)
14:52:00.642 INFO  SparkContext - Created broadcast 596 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.663 INFO  MemoryStore - Block broadcast_597 stored as values in memory (estimated size 298.0 KiB, free 1917.2 MiB)
14:52:00.670 INFO  MemoryStore - Block broadcast_597_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.2 MiB)
14:52:00.670 INFO  BlockManagerInfo - Added broadcast_597_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.5 MiB)
14:52:00.670 INFO  SparkContext - Created broadcast 597 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.692 INFO  MemoryStore - Block broadcast_598 stored as values in memory (estimated size 536.0 B, free 1917.2 MiB)
14:52:00.692 INFO  MemoryStore - Block broadcast_598_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.2 MiB)
14:52:00.693 INFO  BlockManagerInfo - Added broadcast_598_piece0 in memory on localhost:44923 (size: 187.0 B, free: 1919.5 MiB)
14:52:00.693 INFO  SparkContext - Created broadcast 598 from broadcast at CramSource.java:114
14:52:00.694 INFO  MemoryStore - Block broadcast_599 stored as values in memory (estimated size 297.9 KiB, free 1916.9 MiB)
14:52:00.700 INFO  MemoryStore - Block broadcast_599_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1916.9 MiB)
14:52:00.700 INFO  BlockManagerInfo - Added broadcast_599_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.5 MiB)
14:52:00.700 INFO  SparkContext - Created broadcast 599 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.716 INFO  MemoryStore - Block broadcast_600 stored as values in memory (estimated size 608.0 B, free 1916.9 MiB)
14:52:00.716 INFO  MemoryStore - Block broadcast_600_piece0 stored as bytes in memory (estimated size 206.0 B, free 1916.9 MiB)
14:52:00.716 INFO  BlockManagerInfo - Added broadcast_600_piece0 in memory on localhost:44923 (size: 206.0 B, free: 1919.5 MiB)
14:52:00.716 INFO  SparkContext - Created broadcast 600 from broadcast at CramSource.java:114
14:52:00.717 INFO  MemoryStore - Block broadcast_601 stored as values in memory (estimated size 297.9 KiB, free 1916.6 MiB)
14:52:00.723 INFO  MemoryStore - Block broadcast_601_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1916.5 MiB)
14:52:00.723 INFO  BlockManagerInfo - Added broadcast_601_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.4 MiB)
14:52:00.724 INFO  SparkContext - Created broadcast 601 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.730 INFO  BlockManagerInfo - Removed broadcast_597_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.5 MiB)
14:52:00.730 INFO  BlockManagerInfo - Removed broadcast_593_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.5 MiB)
14:52:00.731 INFO  BlockManagerInfo - Removed broadcast_590_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.6 MiB)
14:52:00.731 INFO  BlockManagerInfo - Removed broadcast_589_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1919.6 MiB)
14:52:00.732 INFO  BlockManagerInfo - Removed broadcast_595_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.6 MiB)
14:52:00.732 INFO  BlockManagerInfo - Removed broadcast_591_piece0 on localhost:44923 in memory (size: 103.6 KiB, free: 1919.7 MiB)
14:52:00.732 INFO  BlockManagerInfo - Removed broadcast_596_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.8 MiB)
14:52:00.733 INFO  BlockManagerInfo - Removed broadcast_592_piece0 on localhost:44923 in memory (size: 129.0 B, free: 1919.8 MiB)
14:52:00.733 INFO  BlockManagerInfo - Removed broadcast_588_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.8 MiB)
14:52:00.733 INFO  BlockManagerInfo - Removed broadcast_587_piece0 on localhost:44923 in memory (size: 129.0 B, free: 1919.8 MiB)
14:52:00.734 INFO  BlockManagerInfo - Removed broadcast_594_piece0 on localhost:44923 in memory (size: 103.6 KiB, free: 1919.9 MiB)
14:52:00.734 INFO  BlockManagerInfo - Removed broadcast_599_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1920.0 MiB)
14:52:00.736 INFO  BlockManagerInfo - Removed broadcast_598_piece0 on localhost:44923 in memory (size: 187.0 B, free: 1920.0 MiB)
14:52:00.752 INFO  MemoryStore - Block broadcast_602 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
14:52:00.759 INFO  MemoryStore - Block broadcast_602_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
14:52:00.759 INFO  BlockManagerInfo - Added broadcast_602_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.9 MiB)
14:52:00.759 INFO  SparkContext - Created broadcast 602 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.780 INFO  FileInputFormat - Total input files to process : 1
14:52:00.819 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:163
14:52:00.820 INFO  DAGScheduler - Got job 223 (collect at ReadsSparkSourceUnitTest.java:163) with 1 output partitions
14:52:00.820 INFO  DAGScheduler - Final stage: ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163)
14:52:00.820 INFO  DAGScheduler - Parents of final stage: List()
14:52:00.820 INFO  DAGScheduler - Missing parents: List()
14:52:00.820 INFO  DAGScheduler - Submitting ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:00.837 INFO  MemoryStore - Block broadcast_603 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
14:52:00.838 INFO  MemoryStore - Block broadcast_603_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1918.8 MiB)
14:52:00.838 INFO  BlockManagerInfo - Added broadcast_603_piece0 in memory on localhost:44923 (size: 153.6 KiB, free: 1919.8 MiB)
14:52:00.838 INFO  SparkContext - Created broadcast 603 from broadcast at DAGScheduler.scala:1580
14:52:00.839 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:00.839 INFO  TaskSchedulerImpl - Adding task set 287.0 with 1 tasks resource profile 0
14:52:00.839 INFO  TaskSetManager - Starting task 0.0 in stage 287.0 (TID 374) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes) 
14:52:00.839 INFO  Executor - Running task 0.0 in stage 287.0 (TID 374)
14:52:00.871 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
14:52:00.882 INFO  Executor - Finished task 0.0 in stage 287.0 (TID 374). 650141 bytes result sent to driver
14:52:00.883 INFO  TaskSetManager - Finished task 0.0 in stage 287.0 (TID 374) in 44 ms on localhost (executor driver) (1/1)
14:52:00.883 INFO  TaskSchedulerImpl - Removed TaskSet 287.0, whose tasks have all completed, from pool 
14:52:00.883 INFO  DAGScheduler - ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163) finished in 0.063 s
14:52:00.883 INFO  DAGScheduler - Job 223 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:00.884 INFO  TaskSchedulerImpl - Killing all running tasks in stage 287: Stage finished
14:52:00.884 INFO  DAGScheduler - Job 223 finished: collect at ReadsSparkSourceUnitTest.java:163, took 0.064052 s
14:52:00.886 INFO  MemoryStore - Block broadcast_604 stored as values in memory (estimated size 208.0 B, free 1918.8 MiB)
14:52:00.888 INFO  MemoryStore - Block broadcast_604_piece0 stored as bytes in memory (estimated size 222.0 B, free 1918.8 MiB)
14:52:00.888 INFO  BlockManagerInfo - Added broadcast_604_piece0 in memory on localhost:44923 (size: 222.0 B, free: 1919.8 MiB)
14:52:00.888 INFO  SparkContext - Created broadcast 604 from broadcast at AbstractBinarySamSource.java:82
14:52:00.889 INFO  MemoryStore - Block broadcast_605 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
14:52:00.895 INFO  MemoryStore - Block broadcast_605_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
14:52:00.896 INFO  BlockManagerInfo - Added broadcast_605_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.7 MiB)
14:52:00.896 INFO  SparkContext - Created broadcast 605 from newAPIHadoopFile at PathSplitSource.java:96
14:52:00.918 INFO  FileInputFormat - Total input files to process : 1
14:52:00.922 INFO  SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:265
14:52:00.922 INFO  DAGScheduler - Got job 224 (count at ReadsSparkSourceUnitTest.java:265) with 1 output partitions
14:52:00.922 INFO  DAGScheduler - Final stage: ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265)
14:52:00.922 INFO  DAGScheduler - Parents of final stage: List()
14:52:00.923 INFO  DAGScheduler - Missing parents: List()
14:52:00.923 INFO  DAGScheduler - Submitting ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:00.940 INFO  MemoryStore - Block broadcast_606 stored as values in memory (estimated size 426.2 KiB, free 1918.0 MiB)
14:52:00.941 INFO  MemoryStore - Block broadcast_606_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.8 MiB)
14:52:00.941 INFO  BlockManagerInfo - Added broadcast_606_piece0 in memory on localhost:44923 (size: 153.7 KiB, free: 1919.6 MiB)
14:52:00.942 INFO  SparkContext - Created broadcast 606 from broadcast at DAGScheduler.scala:1580
14:52:00.942 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:00.942 INFO  TaskSchedulerImpl - Adding task set 288.0 with 1 tasks resource profile 0
14:52:00.942 INFO  TaskSetManager - Starting task 0.0 in stage 288.0 (TID 375) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes) 
14:52:00.943 INFO  Executor - Running task 0.0 in stage 288.0 (TID 375)
14:52:00.975 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
14:52:00.980 INFO  Executor - Finished task 0.0 in stage 288.0 (TID 375). 989 bytes result sent to driver
14:52:00.980 INFO  TaskSetManager - Finished task 0.0 in stage 288.0 (TID 375) in 38 ms on localhost (executor driver) (1/1)
14:52:00.980 INFO  TaskSchedulerImpl - Removed TaskSet 288.0, whose tasks have all completed, from pool 
14:52:00.980 INFO  DAGScheduler - ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265) finished in 0.057 s
14:52:00.981 INFO  DAGScheduler - Job 224 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:00.981 INFO  TaskSchedulerImpl - Killing all running tasks in stage 288: Stage finished
14:52:00.981 INFO  DAGScheduler - Job 224 finished: count at ReadsSparkSourceUnitTest.java:265, took 0.058611 s
14:52:00.983 INFO  MemoryStore - Block broadcast_607 stored as values in memory (estimated size 160.0 B, free 1917.8 MiB)
14:52:00.984 INFO  MemoryStore - Block broadcast_607_piece0 stored as bytes in memory (estimated size 212.0 B, free 1917.8 MiB)
14:52:00.984 INFO  BlockManagerInfo - Added broadcast_607_piece0 in memory on localhost:44923 (size: 212.0 B, free: 1919.6 MiB)
14:52:00.984 INFO  SparkContext - Created broadcast 607 from broadcast at AbstractBinarySamSource.java:82
14:52:00.986 INFO  MemoryStore - Block broadcast_608 stored as values in memory (estimated size 298.0 KiB, free 1917.6 MiB)
14:52:00.996 INFO  MemoryStore - Block broadcast_608_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.5 MiB)
14:52:00.997 INFO  BlockManagerInfo - Added broadcast_608_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.5 MiB)
14:52:00.997 INFO  SparkContext - Created broadcast 608 from newAPIHadoopFile at PathSplitSource.java:96
14:52:01.019 INFO  FileInputFormat - Total input files to process : 1
14:52:01.023 INFO  SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:286
14:52:01.023 INFO  DAGScheduler - Got job 225 (count at ReadsSparkSourceUnitTest.java:286) with 1 output partitions
14:52:01.023 INFO  DAGScheduler - Final stage: ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286)
14:52:01.023 INFO  DAGScheduler - Parents of final stage: List()
14:52:01.023 INFO  DAGScheduler - Missing parents: List()
14:52:01.023 INFO  DAGScheduler - Submitting ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:01.040 INFO  MemoryStore - Block broadcast_609 stored as values in memory (estimated size 426.2 KiB, free 1917.1 MiB)
14:52:01.041 INFO  MemoryStore - Block broadcast_609_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1916.9 MiB)
14:52:01.041 INFO  BlockManagerInfo - Added broadcast_609_piece0 in memory on localhost:44923 (size: 153.7 KiB, free: 1919.4 MiB)
14:52:01.042 INFO  SparkContext - Created broadcast 609 from broadcast at DAGScheduler.scala:1580
14:52:01.042 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:01.042 INFO  TaskSchedulerImpl - Adding task set 289.0 with 1 tasks resource profile 0
14:52:01.042 INFO  TaskSetManager - Starting task 0.0 in stage 289.0 (TID 376) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7901 bytes) 
14:52:01.043 INFO  Executor - Running task 0.0 in stage 289.0 (TID 376)
14:52:01.075 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/CEUTrio.HiSeq.WGS.b37.NA12878.snippet_with_unmapped.bam:0+2184
14:52:01.079 INFO  Executor - Finished task 0.0 in stage 289.0 (TID 376). 989 bytes result sent to driver
14:52:01.079 INFO  TaskSetManager - Finished task 0.0 in stage 289.0 (TID 376) in 37 ms on localhost (executor driver) (1/1)
14:52:01.080 INFO  TaskSchedulerImpl - Removed TaskSet 289.0, whose tasks have all completed, from pool 
14:52:01.080 INFO  DAGScheduler - ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286) finished in 0.057 s
14:52:01.080 INFO  DAGScheduler - Job 225 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:01.080 INFO  TaskSchedulerImpl - Killing all running tasks in stage 289: Stage finished
14:52:01.080 INFO  DAGScheduler - Job 225 finished: count at ReadsSparkSourceUnitTest.java:286, took 0.057108 s
14:52:01.083 INFO  MemoryStore - Block broadcast_610 stored as values in memory (estimated size 298.0 KiB, free 1916.6 MiB)
14:52:01.089 INFO  MemoryStore - Block broadcast_610_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1916.6 MiB)
14:52:01.089 INFO  BlockManagerInfo - Added broadcast_610_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.3 MiB)
14:52:01.089 INFO  SparkContext - Created broadcast 610 from newAPIHadoopFile at PathSplitSource.java:96
14:52:01.112 INFO  MemoryStore - Block broadcast_611 stored as values in memory (estimated size 298.2 KiB, free 1916.3 MiB)
14:52:01.117 INFO  BlockManagerInfo - Removed broadcast_604_piece0 on localhost:44923 in memory (size: 222.0 B, free: 1919.3 MiB)
14:52:01.118 INFO  BlockManagerInfo - Removed broadcast_609_piece0 on localhost:44923 in memory (size: 153.7 KiB, free: 1919.5 MiB)
14:52:01.118 INFO  BlockManagerInfo - Removed broadcast_608_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.5 MiB)
14:52:01.119 INFO  BlockManagerInfo - Removed broadcast_602_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.6 MiB)
14:52:01.119 INFO  BlockManagerInfo - Removed broadcast_607_piece0 on localhost:44923 in memory (size: 212.0 B, free: 1919.6 MiB)
14:52:01.119 INFO  BlockManagerInfo - Removed broadcast_605_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.6 MiB)
14:52:01.120 INFO  BlockManagerInfo - Removed broadcast_603_piece0 on localhost:44923 in memory (size: 153.6 KiB, free: 1919.8 MiB)
14:52:01.120 INFO  BlockManagerInfo - Removed broadcast_606_piece0 on localhost:44923 in memory (size: 153.7 KiB, free: 1919.9 MiB)
14:52:01.125 INFO  MemoryStore - Block broadcast_611_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.0 MiB)
14:52:01.125 INFO  BlockManagerInfo - Added broadcast_611_piece0 in memory on localhost:44923 (size: 50.3 KiB, free: 1919.9 MiB)
14:52:01.125 INFO  SparkContext - Created broadcast 611 from newAPIHadoopFile at PathSplitSource.java:96
14:52:01.147 INFO  FileInputFormat - Total input files to process : 1
14:52:01.147 INFO  FileInputFormat - Total input files to process : 1
14:52:01.157 INFO  MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
14:52:01.158 INFO  NameNode - Formatting using clusterid: testClusterID
14:52:01.158 INFO  FSEditLog - Edit logging is async:true
14:52:01.166 INFO  FSNamesystem - KeyProvider: null
14:52:01.166 INFO  FSNamesystem - fsLock is fair: true
14:52:01.166 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
14:52:01.166 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
14:52:01.166 INFO  FSNamesystem - supergroup             = supergroup
14:52:01.166 INFO  FSNamesystem - isPermissionEnabled    = true
14:52:01.166 INFO  FSNamesystem - isStoragePolicyEnabled = true
14:52:01.166 INFO  FSNamesystem - HA Enabled: false
14:52:01.167 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:52:01.167 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
14:52:01.167 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
14:52:01.167 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14:52:01.167 INFO  BlockManager - The block deletion will start around 2026 Mar 04 14:52:01
14:52:01.167 INFO  GSet - Computing capacity for map BlocksMap
14:52:01.167 INFO  GSet - VM type       = 64-bit
14:52:01.167 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
14:52:01.167 INFO  GSet - capacity      = 2^23 = 8388608 entries
14:52:01.169 INFO  BlockManager - Storage policy satisfier is disabled
14:52:01.169 INFO  BlockManager - dfs.block.access.token.enable = false
14:52:01.169 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
14:52:01.169 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
14:52:01.169 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
14:52:01.169 INFO  BlockManager - defaultReplication         = 1
14:52:01.169 INFO  BlockManager - maxReplication             = 512
14:52:01.169 INFO  BlockManager - minReplication             = 1
14:52:01.169 INFO  BlockManager - maxReplicationStreams      = 2
14:52:01.169 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
14:52:01.169 INFO  BlockManager - encryptDataTransfer        = false
14:52:01.169 INFO  BlockManager - maxNumBlocksToLog          = 1000
14:52:01.169 INFO  GSet - Computing capacity for map INodeMap
14:52:01.169 INFO  GSet - VM type       = 64-bit
14:52:01.169 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
14:52:01.169 INFO  GSet - capacity      = 2^22 = 4194304 entries
14:52:01.173 INFO  BlockManagerInfo - Removed broadcast_610_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1919.9 MiB)
14:52:01.173 INFO  FSDirectory - ACLs enabled? true
14:52:01.173 INFO  FSDirectory - POSIX ACL inheritance enabled? true
14:52:01.173 INFO  FSDirectory - XAttrs enabled? true
14:52:01.173 INFO  NameNode - Caching file names occurring more than 10 times
14:52:01.174 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
14:52:01.174 INFO  SnapshotManager - SkipList is disabled
14:52:01.174 INFO  GSet - Computing capacity for map cachedBlocks
14:52:01.174 INFO  GSet - VM type       = 64-bit
14:52:01.174 INFO  BlockManagerInfo - Removed broadcast_611_piece0 on localhost:44923 in memory (size: 50.3 KiB, free: 1920.0 MiB)
14:52:01.174 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
14:52:01.174 INFO  GSet - capacity      = 2^20 = 1048576 entries
14:52:01.176 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
14:52:01.176 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
14:52:01.176 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
14:52:01.176 INFO  FSNamesystem - Retry cache on namenode is enabled
14:52:01.177 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14:52:01.177 INFO  GSet - Computing capacity for map NameNodeRetryCache
14:52:01.177 INFO  GSet - VM type       = 64-bit
14:52:01.177 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
14:52:01.177 INFO  GSet - capacity      = 2^17 = 131072 entries
14:52:01.177 INFO  FSImage - Allocated new BlockPoolId: BP-1900678018-10.1.0.125-1772635921177
14:52:01.179 INFO  Storage - Storage directory /tmp/minicluster_storage1121274261722281395/name-0-1 has been successfully formatted.
14:52:01.180 INFO  Storage - Storage directory /tmp/minicluster_storage1121274261722281395/name-0-2 has been successfully formatted.
14:52:01.189 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage1121274261722281395/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
14:52:01.190 INFO  FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage1121274261722281395/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
14:52:01.193 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage1121274261722281395/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
14:52:01.194 INFO  FSImageFormatProtobuf - Image file /tmp/minicluster_storage1121274261722281395/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
14:52:01.195 INFO  NNStorageRetentionManager - Going to retain 1 images with txid >= 0
14:52:01.198 INFO  FSNamesystem - Stopping services started for active state
14:52:01.198 INFO  FSNamesystem - Stopping services started for standby state
14:52:01.199 INFO  NameNode - createNameNode []
14:52:01.200 WARN  MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
14:52:01.200 INFO  MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
14:52:01.200 INFO  MetricsSystemImpl - NameNode metrics system started
14:52:01.201 INFO  NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
14:52:01.211 INFO  JvmPauseMonitor - Starting JVM pause monitor
14:52:01.211 INFO  DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
14:52:01.212 INFO  DFSUtil - Starting Web-server for hdfs at: http://localhost:0
14:52:01.212 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
14:52:01.213 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
14:52:01.214 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
14:52:01.214 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
14:52:01.214 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
14:52:01.215 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
14:52:01.215 INFO  HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
14:52:01.215 INFO  HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
14:52:01.215 INFO  HttpServer2 - Jetty bound to port 46875
14:52:01.215 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
14:52:01.217 INFO  session - DefaultSessionIdManager workerName=node0
14:52:01.217 INFO  session - No SessionScavenger set, using defaults
14:52:01.217 INFO  session - node0 Scavenging every 600000ms
14:52:01.217 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
14:52:01.218 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@61d0cda7{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
14:52:01.312 INFO  ContextHandler - Started o.e.j.w.WebAppContext@571eced6{hdfs,/,file:///tmp/jetty-localhost-46875-hadoop-hdfs-3_3_6-tests_jar-_-any-6616618975947332568/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
14:52:01.312 INFO  AbstractConnector - Started ServerConnector@4e71f0b6{HTTP/1.1, (http/1.1)}{localhost:46875}
14:52:01.312 INFO  Server - Started @80679ms
14:52:01.313 INFO  FSEditLog - Edit logging is async:true
14:52:01.322 INFO  FSNamesystem - KeyProvider: null
14:52:01.322 INFO  FSNamesystem - fsLock is fair: true
14:52:01.322 INFO  FSNamesystem - Detailed lock hold time metrics enabled: false
14:52:01.322 INFO  FSNamesystem - fsOwner                = runner (auth:SIMPLE)
14:52:01.322 INFO  FSNamesystem - supergroup             = supergroup
14:52:01.322 INFO  FSNamesystem - isPermissionEnabled    = true
14:52:01.322 INFO  FSNamesystem - isStoragePolicyEnabled = true
14:52:01.322 INFO  FSNamesystem - HA Enabled: false
14:52:01.322 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:52:01.322 INFO  DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
14:52:01.322 INFO  DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
14:52:01.323 INFO  BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14:52:01.323 INFO  BlockManager - The block deletion will start around 2026 Mar 04 14:52:01
14:52:01.323 INFO  GSet - Computing capacity for map BlocksMap
14:52:01.323 INFO  GSet - VM type       = 64-bit
14:52:01.323 INFO  GSet - 2.0% max memory 3.4 GB = 70 MB
14:52:01.323 INFO  GSet - capacity      = 2^23 = 8388608 entries
14:52:01.327 INFO  BlockManager - Storage policy satisfier is disabled
14:52:01.327 INFO  BlockManager - dfs.block.access.token.enable = false
14:52:01.327 INFO  BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
14:52:01.327 INFO  BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
14:52:01.327 INFO  BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
14:52:01.327 INFO  BlockManager - defaultReplication         = 1
14:52:01.327 INFO  BlockManager - maxReplication             = 512
14:52:01.327 INFO  BlockManager - minReplication             = 1
14:52:01.327 INFO  BlockManager - maxReplicationStreams      = 2
14:52:01.327 INFO  BlockManager - redundancyRecheckInterval  = 3000ms
14:52:01.327 INFO  BlockManager - encryptDataTransfer        = false
14:52:01.327 INFO  BlockManager - maxNumBlocksToLog          = 1000
14:52:01.328 INFO  GSet - Computing capacity for map INodeMap
14:52:01.328 INFO  GSet - VM type       = 64-bit
14:52:01.328 INFO  GSet - 1.0% max memory 3.4 GB = 35 MB
14:52:01.328 INFO  GSet - capacity      = 2^22 = 4194304 entries
14:52:01.329 INFO  FSDirectory - ACLs enabled? true
14:52:01.329 INFO  FSDirectory - POSIX ACL inheritance enabled? true
14:52:01.329 INFO  FSDirectory - XAttrs enabled? true
14:52:01.329 INFO  NameNode - Caching file names occurring more than 10 times
14:52:01.329 INFO  SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
14:52:01.329 INFO  SnapshotManager - SkipList is disabled
14:52:01.329 INFO  GSet - Computing capacity for map cachedBlocks
14:52:01.329 INFO  GSet - VM type       = 64-bit
14:52:01.329 INFO  GSet - 0.25% max memory 3.4 GB = 8.8 MB
14:52:01.329 INFO  GSet - capacity      = 2^20 = 1048576 entries
14:52:01.329 INFO  TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
14:52:01.329 INFO  TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
14:52:01.329 INFO  TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
14:52:01.329 INFO  FSNamesystem - Retry cache on namenode is enabled
14:52:01.329 INFO  FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14:52:01.329 INFO  GSet - Computing capacity for map NameNodeRetryCache
14:52:01.329 INFO  GSet - VM type       = 64-bit
14:52:01.329 INFO  GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
14:52:01.329 INFO  GSet - capacity      = 2^17 = 131072 entries
14:52:01.332 INFO  Storage - Lock on /tmp/minicluster_storage1121274261722281395/name-0-1/in_use.lock acquired by nodename 3348@runnervmnay03
14:52:01.332 INFO  Storage - Lock on /tmp/minicluster_storage1121274261722281395/name-0-2/in_use.lock acquired by nodename 3348@runnervmnay03
14:52:01.333 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage1121274261722281395/name-0-1/current
14:52:01.333 INFO  FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage1121274261722281395/name-0-2/current
14:52:01.333 INFO  FSImage - No edit log streams selected.
14:52:01.333 INFO  FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage1121274261722281395/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
14:52:01.334 INFO  FSImageFormatPBINode - Loading 1 INodes.
14:52:01.335 INFO  FSImageFormatPBINode - Successfully loaded 1 inodes
14:52:01.335 INFO  FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
14:52:01.335 INFO  FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
14:52:01.335 INFO  FSImage - Loaded image for txid 0 from /tmp/minicluster_storage1121274261722281395/name-0-1/current/fsimage_0000000000000000000
14:52:01.335 INFO  FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
14:52:01.336 INFO  FSEditLog - Starting log segment at 1
14:52:01.339 INFO  NameCache - initialized with 0 entries 0 lookups
14:52:01.339 INFO  FSNamesystem - Finished loading FSImage in 9 msecs
14:52:01.339 INFO  NameNode - RPC server is binding to localhost:0
14:52:01.340 INFO  NameNode - Enable NameNode state context:false
14:52:01.340 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
14:52:01.340 INFO  Server - Listener at localhost:33789
14:52:01.340 INFO  Server - Starting Socket Reader #1 for port 0
14:52:01.342 INFO  NameNode - Clients are to use localhost:33789 to access this namenode/service.
14:52:01.343 INFO  FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
14:52:01.358 INFO  LeaseManager - Number of blocks under construction: 0
14:52:01.358 INFO  DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
14:52:01.359 INFO  BlockManager - Start MarkedDeleteBlockScrubber thread
14:52:01.359 INFO  BlockManager - initializing replication queues
14:52:01.360 INFO  StateChange - STATE* Leaving safe mode after 0 secs
14:52:01.360 INFO  StateChange - STATE* Network topology has 0 racks and 0 datanodes
14:52:01.360 INFO  StateChange - STATE* UnderReplicatedBlocks has 0 blocks
14:52:01.364 INFO  Server - IPC Server Responder: starting
14:52:01.364 INFO  Server - IPC Server listener on 0: starting
14:52:01.365 INFO  BlockManager - Total number of blocks            = 0
14:52:01.365 INFO  BlockManager - Number of invalid blocks          = 0
14:52:01.365 INFO  BlockManager - Number of under-replicated blocks = 0
14:52:01.365 INFO  BlockManager - Number of  over-replicated blocks = 0
14:52:01.365 INFO  BlockManager - Number of blocks being written    = 0
14:52:01.365 INFO  StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 5 msec
14:52:01.366 INFO  NameNode - NameNode RPC up at: localhost/127.0.0.1:33789
14:52:01.366 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
14:52:01.366 INFO  FSNamesystem - Starting services required for active state
14:52:01.366 INFO  FSDirectory - Initializing quota with 12 thread(s)
14:52:01.367 INFO  FSDirectory - Quota initialization completed in 1 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
14:52:01.367 INFO  CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
14:52:01.368 INFO  MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data1,[DISK]file:/tmp/minicluster_storage1121274261722281395/data/data2
14:52:01.369 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data1
14:52:01.369 INFO  ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data2
14:52:01.382 INFO  MetricsSystemImpl - DataNode metrics system started (again)
14:52:01.382 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:52:01.382 INFO  BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
14:52:01.382 INFO  DataNode - Configured hostname is 127.0.0.1
14:52:01.382 INFO  Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
14:52:01.382 INFO  DataNode - Starting DataNode with maxLockedMemory = 0
14:52:01.383 INFO  DataNode - Opened streaming server at /127.0.0.1:33523
14:52:01.383 INFO  DataNode - Balancing bandwidth is 104857600 bytes/s
14:52:01.383 INFO  DataNode - Number threads for balancing is 100
14:52:01.385 WARN  AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
14:52:01.385 WARN  HttpRequestLog - Jetty request log can only be enabled using Log4j
14:52:01.386 INFO  HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
14:52:01.387 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
14:52:01.387 INFO  HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
14:52:01.387 INFO  HttpServer2 - Jetty bound to port 40991
14:52:01.387 INFO  Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
14:52:01.388 INFO  session - DefaultSessionIdManager workerName=node0
14:52:01.388 INFO  session - No SessionScavenger set, using defaults
14:52:01.388 INFO  session - node0 Scavenging every 660000ms
14:52:01.389 INFO  ContextHandler - Started o.e.j.s.ServletContextHandler@1e74d9f4{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
14:52:01.436 INFO  BlockManagerInfo - Removed broadcast_600_piece0 on localhost:44923 in memory (size: 206.0 B, free: 1920.0 MiB)
14:52:01.437 INFO  BlockManagerInfo - Removed broadcast_601_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1920.0 MiB)
14:52:01.502 INFO  ContextHandler - Started o.e.j.w.WebAppContext@14e7fc8a{datanode,/,file:///tmp/jetty-localhost-40991-hadoop-hdfs-3_3_6-tests_jar-_-any-5090116418809913298/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
14:52:01.502 INFO  AbstractConnector - Started ServerConnector@3d26544d{HTTP/1.1, (http/1.1)}{localhost:40991}
14:52:01.503 INFO  Server - Started @80869ms
14:52:01.504 WARN  DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
14:52:01.505 INFO  DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:35609
14:52:01.505 INFO  JvmPauseMonitor - Starting JVM pause monitor
14:52:01.505 INFO  DataNode - dnUserName = runner
14:52:01.505 INFO  DataNode - supergroup = supergroup
14:52:01.505 INFO  CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
14:52:01.506 INFO  Server - Listener at localhost:44887
14:52:01.506 INFO  Server - Starting Socket Reader #1 for port 0
14:52:01.508 INFO  DataNode - Opened IPC server at /127.0.0.1:44887
14:52:01.518 INFO  DataNode - Refresh request received for nameservices: null
14:52:01.519 INFO  DataNode - Starting BPOfferServices for nameservices: <default>
14:52:01.519 WARN  MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
14:52:01.519 INFO  DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:33789 starting to offer service
14:52:01.520 INFO  Server - IPC Server Responder: starting
14:52:01.520 INFO  Server - IPC Server listener on 0: starting
14:52:01.521 INFO  DataNode - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:33789
14:52:01.521 INFO  Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
14:52:01.522 INFO  Storage - Lock on /tmp/minicluster_storage1121274261722281395/data/data1/in_use.lock acquired by nodename 3348@runnervmnay03
14:52:01.522 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data1 is not formatted for namespace 1998660978. Formatting...
14:52:01.522 INFO  Storage - Generated new storageID DS-54f4481a-9859-4f8a-84ab-9a094192389b for directory /tmp/minicluster_storage1121274261722281395/data/data1 
14:52:01.523 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
14:52:01.523 INFO  MiniDFSCluster - dnInfo.length != numDataNodes
14:52:01.523 INFO  MiniDFSCluster - Waiting for cluster to become active
14:52:01.524 INFO  Storage - Lock on /tmp/minicluster_storage1121274261722281395/data/data2/in_use.lock acquired by nodename 3348@runnervmnay03
14:52:01.524 INFO  Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data2 is not formatted for namespace 1998660978. Formatting...
14:52:01.524 INFO  Storage - Generated new storageID DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7 for directory /tmp/minicluster_storage1121274261722281395/data/data2 
14:52:01.539 INFO  Storage - Analyzing storage directories for bpid BP-1900678018-10.1.0.125-1772635921177
14:52:01.539 INFO  Storage - Locking is disabled for /tmp/minicluster_storage1121274261722281395/data/data1/current/BP-1900678018-10.1.0.125-1772635921177
14:52:01.539 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data1 and block pool id BP-1900678018-10.1.0.125-1772635921177 is not formatted. Formatting ...
14:52:01.539 INFO  Storage - Formatting block pool BP-1900678018-10.1.0.125-1772635921177 directory /tmp/minicluster_storage1121274261722281395/data/data1/current/BP-1900678018-10.1.0.125-1772635921177/current
14:52:01.552 INFO  Storage - Analyzing storage directories for bpid BP-1900678018-10.1.0.125-1772635921177
14:52:01.553 INFO  Storage - Locking is disabled for /tmp/minicluster_storage1121274261722281395/data/data2/current/BP-1900678018-10.1.0.125-1772635921177
14:52:01.553 INFO  Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data2 and block pool id BP-1900678018-10.1.0.125-1772635921177 is not formatted. Formatting ...
14:52:01.553 INFO  Storage - Formatting block pool BP-1900678018-10.1.0.125-1772635921177 directory /tmp/minicluster_storage1121274261722281395/data/data2/current/BP-1900678018-10.1.0.125-1772635921177/current
14:52:01.554 INFO  DataNode - Setting up storage: nsid=1998660978;bpid=BP-1900678018-10.1.0.125-1772635921177;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=1998660978;c=1772635921177;bpid=BP-1900678018-10.1.0.125-1772635921177;dnuuid=null
14:52:01.555 INFO  DataNode - Generated and persisted new Datanode UUID 52d94e91-02f7-44c9-8ba0-6b499b7db073
14:52:01.555 INFO  FsDatasetImpl - The datanode lock is a read write lock
14:52:01.556 INFO  FsDatasetImpl - Added new volume: DS-54f4481a-9859-4f8a-84ab-9a094192389b
14:52:01.556 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data1, StorageType: DISK
14:52:01.557 INFO  FsDatasetImpl - Added new volume: DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7
14:52:01.557 INFO  FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage1121274261722281395/data/data2, StorageType: DISK
14:52:01.557 INFO  MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
14:52:01.557 INFO  FsDatasetImpl - Registered FSDatasetState MBean
14:52:01.558 INFO  FsDatasetImpl - Adding block pool BP-1900678018-10.1.0.125-1772635921177
14:52:01.558 INFO  FsDatasetImpl - Scanning block pool BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data1...
14:52:01.558 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage1121274261722281395/data/data1/current/BP-1900678018-10.1.0.125-1772635921177/current, will proceed with Du for space computation calculation, 
14:52:01.559 INFO  FsDatasetImpl - Scanning block pool BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data2...
14:52:01.559 WARN  FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage1121274261722281395/data/data2/current/BP-1900678018-10.1.0.125-1772635921177/current, will proceed with Du for space computation calculation, 
14:52:01.573 INFO  FsDatasetImpl - Time taken to scan block pool BP-1900678018-10.1.0.125-1772635921177 on /tmp/minicluster_storage1121274261722281395/data/data2: 14ms
14:52:01.573 INFO  FsDatasetImpl - Time taken to scan block pool BP-1900678018-10.1.0.125-1772635921177 on /tmp/minicluster_storage1121274261722281395/data/data1: 15ms
14:52:01.573 INFO  FsDatasetImpl - Total time to scan all replicas for block pool BP-1900678018-10.1.0.125-1772635921177: 15ms
14:52:01.573 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data1...
14:52:01.573 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage1121274261722281395/data/data1/current/BP-1900678018-10.1.0.125-1772635921177/current/replicas doesn't exist 
14:52:01.573 INFO  FsDatasetImpl - Adding replicas to map for block pool BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data2...
14:52:01.573 INFO  BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage1121274261722281395/data/data2/current/BP-1900678018-10.1.0.125-1772635921177/current/replicas doesn't exist 
14:52:01.573 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data1: 1ms
14:52:01.574 INFO  FsDatasetImpl - Time to add replicas to map for block pool BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data2: 0ms
14:52:01.574 INFO  FsDatasetImpl - Total time to add all replicas to map for block pool BP-1900678018-10.1.0.125-1772635921177: 1ms
14:52:01.574 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage1121274261722281395/data/data1
14:52:01.574 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage1121274261722281395/data/data1
14:52:01.574 INFO  ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage1121274261722281395/data/data2
14:52:01.574 INFO  DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage1121274261722281395/data/data2
14:52:01.575 INFO  VolumeScanner - Now scanning bpid BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data1
14:52:01.575 WARN  DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
14:52:01.575 INFO  DirectoryScanner - Periodic Directory Tree Verification scan starting in 7057602ms with interval of 21600000ms and throttle limit of -1ms/s
14:52:01.575 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage1121274261722281395/data/data1, DS-54f4481a-9859-4f8a-84ab-9a094192389b): finished scanning block pool BP-1900678018-10.1.0.125-1772635921177
14:52:01.575 INFO  VolumeScanner - Now scanning bpid BP-1900678018-10.1.0.125-1772635921177 on volume /tmp/minicluster_storage1121274261722281395/data/data2
14:52:01.575 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage1121274261722281395/data/data2, DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7): finished scanning block pool BP-1900678018-10.1.0.125-1772635921177
14:52:01.575 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage1121274261722281395/data/data1, DS-54f4481a-9859-4f8a-84ab-9a094192389b): no suitable block pools found to scan.  Waiting 1814400000 ms.
14:52:01.575 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage1121274261722281395/data/data2, DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7): no suitable block pools found to scan.  Waiting 1814400000 ms.
14:52:01.575 INFO  DataNode - Block pool BP-1900678018-10.1.0.125-1772635921177 (Datanode Uuid 52d94e91-02f7-44c9-8ba0-6b499b7db073) service to localhost/127.0.0.1:33789 beginning handshake with NN
14:52:01.576 INFO  StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:33523, datanodeUuid=52d94e91-02f7-44c9-8ba0-6b499b7db073, infoPort=35609, infoSecurePort=0, ipcPort=44887, storageInfo=lv=-57;cid=testClusterID;nsid=1998660978;c=1772635921177) storage 52d94e91-02f7-44c9-8ba0-6b499b7db073
14:52:01.576 INFO  NetworkTopology - Adding a new node: /default-rack/127.0.0.1:33523
14:52:01.576 INFO  BlockReportLeaseManager - Registered DN 52d94e91-02f7-44c9-8ba0-6b499b7db073 (127.0.0.1:33523).
14:52:01.576 INFO  DataNode - Block pool BP-1900678018-10.1.0.125-1772635921177 (Datanode Uuid 52d94e91-02f7-44c9-8ba0-6b499b7db073) service to localhost/127.0.0.1:33789 successfully registered with NN
14:52:01.576 INFO  DataNode - For namenode localhost/127.0.0.1:33789 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
14:52:01.577 INFO  DataNode - Starting IBR Task Handler.
14:52:01.577 INFO  DatanodeDescriptor - Adding new storage ID DS-54f4481a-9859-4f8a-84ab-9a094192389b for DN 127.0.0.1:33523
14:52:01.577 INFO  DatanodeDescriptor - Adding new storage ID DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7 for DN 127.0.0.1:33523
14:52:01.577 INFO  DataNode - After receiving heartbeat response, updating state of namenode localhost:33789 to active
14:52:01.578 INFO  BlockStateChange - BLOCK* processReport 0x8db67690ead92703 with lease ID 0x91b47838560c5590: Processing first storage report for DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7 from datanode DatanodeRegistration(127.0.0.1:33523, datanodeUuid=52d94e91-02f7-44c9-8ba0-6b499b7db073, infoPort=35609, infoSecurePort=0, ipcPort=44887, storageInfo=lv=-57;cid=testClusterID;nsid=1998660978;c=1772635921177)
14:52:01.578 INFO  BlockStateChange - BLOCK* processReport 0x8db67690ead92703 with lease ID 0x91b47838560c5590: from storage DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7 node DatanodeRegistration(127.0.0.1:33523, datanodeUuid=52d94e91-02f7-44c9-8ba0-6b499b7db073, infoPort=35609, infoSecurePort=0, ipcPort=44887, storageInfo=lv=-57;cid=testClusterID;nsid=1998660978;c=1772635921177), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
14:52:01.578 INFO  BlockStateChange - BLOCK* processReport 0x8db67690ead92703 with lease ID 0x91b47838560c5590: Processing first storage report for DS-54f4481a-9859-4f8a-84ab-9a094192389b from datanode DatanodeRegistration(127.0.0.1:33523, datanodeUuid=52d94e91-02f7-44c9-8ba0-6b499b7db073, infoPort=35609, infoSecurePort=0, ipcPort=44887, storageInfo=lv=-57;cid=testClusterID;nsid=1998660978;c=1772635921177)
14:52:01.578 INFO  BlockStateChange - BLOCK* processReport 0x8db67690ead92703 with lease ID 0x91b47838560c5590: from storage DS-54f4481a-9859-4f8a-84ab-9a094192389b node DatanodeRegistration(127.0.0.1:33523, datanodeUuid=52d94e91-02f7-44c9-8ba0-6b499b7db073, infoPort=35609, infoSecurePort=0, ipcPort=44887, storageInfo=lv=-57;cid=testClusterID;nsid=1998660978;c=1772635921177), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
14:52:01.578 INFO  DataNode - Successfully sent block report 0x8db67690ead92703 with lease ID 0x91b47838560c5590 to namenode: localhost/127.0.0.1:33789,  containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
14:52:01.578 INFO  DataNode - Got finalize command for block pool BP-1900678018-10.1.0.125-1772635921177
14:52:01.624 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=datanodeReport	src=null	dst=null	perm=null	proto=rpc
14:52:01.624 INFO  MiniDFSCluster - Cluster is active
14:52:01.626 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:01.627 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.bam	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
14:52:01.629 INFO  StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:33523 for /user/runner/hdfs.bam
14:52:01.630 INFO  DataNode - Receiving BP-1900678018-10.1.0.125-1772635921177:blk_1073741825_1001 src: /127.0.0.1:59936 dest: /127.0.0.1:33523
14:52:01.632 INFO  clienttrace - src: /127.0.0.1:59936, dest: /127.0.0.1:33523, bytes: 2396, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-429487755_1, offset: 0, srvID: 52d94e91-02f7-44c9-8ba0-6b499b7db073, blockid: BP-1900678018-10.1.0.125-1772635921177:blk_1073741825_1001, duration(ns): 631901
14:52:01.632 INFO  DataNode - PacketResponder: BP-1900678018-10.1.0.125-1772635921177:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
14:52:01.632 INFO  FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.bam
14:52:02.033 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.bam is closed by DFSClient_NONMAPREDUCE_-429487755_1
14:52:02.034 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.035 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=create	src=/user/runner/hdfs.bai	dst=null	perm=runner:supergroup:rw-r--r--	proto=rpc
14:52:02.036 INFO  StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:33523 for /user/runner/hdfs.bai
14:52:02.037 INFO  DataNode - Receiving BP-1900678018-10.1.0.125-1772635921177:blk_1073741826_1002 src: /127.0.0.1:59938 dest: /127.0.0.1:33523
14:52:02.038 INFO  clienttrace - src: /127.0.0.1:59938, dest: /127.0.0.1:33523, bytes: 656, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-429487755_1, offset: 0, srvID: 52d94e91-02f7-44c9-8ba0-6b499b7db073, blockid: BP-1900678018-10.1.0.125-1772635921177:blk_1073741826_1002, duration(ns): 484043
14:52:02.038 INFO  DataNode - PacketResponder: BP-1900678018-10.1.0.125-1772635921177:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
14:52:02.038 INFO  FSNamesystem - BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 <  minimum = 1) in file /user/runner/hdfs.bai
14:52:02.439 INFO  StateChange - DIR* completeFile: /user/runner/hdfs.bai is closed by DFSClient_NONMAPREDUCE_-429487755_1
14:52:02.442 INFO  MemoryStore - Block broadcast_612 stored as values in memory (estimated size 298.3 KiB, free 1919.7 MiB)
14:52:02.449 INFO  MemoryStore - Block broadcast_612_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
14:52:02.449 INFO  BlockManagerInfo - Added broadcast_612_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1920.0 MiB)
14:52:02.449 INFO  SparkContext - Created broadcast 612 from newAPIHadoopFile at PathSplitSource.java:96
14:52:02.470 INFO  FileInputFormat - Total input files to process : 1
14:52:02.507 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:212
14:52:02.507 INFO  DAGScheduler - Got job 226 (collect at ReadsSparkSourceUnitTest.java:212) with 1 output partitions
14:52:02.507 INFO  DAGScheduler - Final stage: ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212)
14:52:02.507 INFO  DAGScheduler - Parents of final stage: List()
14:52:02.507 INFO  DAGScheduler - Missing parents: List()
14:52:02.508 INFO  DAGScheduler - Submitting ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:02.525 INFO  MemoryStore - Block broadcast_613 stored as values in memory (estimated size 426.4 KiB, free 1919.2 MiB)
14:52:02.526 INFO  MemoryStore - Block broadcast_613_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1919.1 MiB)
14:52:02.527 INFO  BlockManagerInfo - Added broadcast_613_piece0 in memory on localhost:44923 (size: 154.0 KiB, free: 1919.8 MiB)
14:52:02.527 INFO  SparkContext - Created broadcast 613 from broadcast at DAGScheduler.scala:1580
14:52:02.527 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:02.527 INFO  TaskSchedulerImpl - Adding task set 290.0 with 1 tasks resource profile 0
14:52:02.528 INFO  TaskSetManager - Starting task 0.0 in stage 290.0 (TID 377) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7899 bytes) 
14:52:02.528 INFO  Executor - Running task 0.0 in stage 290.0 (TID 377)
14:52:02.562 INFO  NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/spark/datasources/ReadsSparkSource/hdfs_file_test.bam:0+2396
14:52:02.565 INFO  Executor - Finished task 0.0 in stage 290.0 (TID 377). 5288 bytes result sent to driver
14:52:02.565 INFO  TaskSetManager - Finished task 0.0 in stage 290.0 (TID 377) in 37 ms on localhost (executor driver) (1/1)
14:52:02.565 INFO  TaskSchedulerImpl - Removed TaskSet 290.0, whose tasks have all completed, from pool 
14:52:02.566 INFO  DAGScheduler - ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212) finished in 0.058 s
14:52:02.566 INFO  DAGScheduler - Job 226 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:02.566 INFO  TaskSchedulerImpl - Killing all running tasks in stage 290: Stage finished
14:52:02.566 INFO  DAGScheduler - Job 226 finished: collect at ReadsSparkSourceUnitTest.java:212, took 0.058771 s
14:52:02.567 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.567 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.568 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.568 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.569 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
14:52:02.569 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.569 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.570 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.572 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
14:52:02.573 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
14:52:02.573 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
14:52:02.574 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.sbi	dst=null	perm=null	proto=rpc
14:52:02.575 INFO  MemoryStore - Block broadcast_614 stored as values in memory (estimated size 298.2 KiB, free 1918.8 MiB)
14:52:02.581 INFO  MemoryStore - Block broadcast_614_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.8 MiB)
14:52:02.581 INFO  BlockManagerInfo - Added broadcast_614_piece0 in memory on localhost:44923 (size: 50.2 KiB, free: 1919.8 MiB)
14:52:02.581 INFO  SparkContext - Created broadcast 614 from newAPIHadoopFile at PathSplitSource.java:96
14:52:02.602 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.602 INFO  FileInputFormat - Total input files to process : 1
14:52:02.603 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.640 INFO  SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:213
14:52:02.640 INFO  DAGScheduler - Got job 227 (collect at ReadsSparkSourceUnitTest.java:213) with 1 output partitions
14:52:02.640 INFO  DAGScheduler - Final stage: ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213)
14:52:02.640 INFO  DAGScheduler - Parents of final stage: List()
14:52:02.640 INFO  DAGScheduler - Missing parents: List()
14:52:02.640 INFO  DAGScheduler - Submitting ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96), which has no missing parents
14:52:02.657 INFO  MemoryStore - Block broadcast_615 stored as values in memory (estimated size 426.4 KiB, free 1918.3 MiB)
14:52:02.658 INFO  MemoryStore - Block broadcast_615_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1918.2 MiB)
14:52:02.659 INFO  BlockManagerInfo - Added broadcast_615_piece0 in memory on localhost:44923 (size: 154.0 KiB, free: 1919.6 MiB)
14:52:02.659 INFO  SparkContext - Created broadcast 615 from broadcast at DAGScheduler.scala:1580
14:52:02.659 INFO  DAGScheduler - Submitting 1 missing tasks from ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
14:52:02.659 INFO  TaskSchedulerImpl - Adding task set 291.0 with 1 tasks resource profile 0
14:52:02.659 INFO  TaskSetManager - Starting task 0.0 in stage 291.0 (TID 378) (localhost, executor driver, partition 0, ANY, 7796 bytes) 
14:52:02.660 INFO  Executor - Running task 0.0 in stage 291.0 (TID 378)
14:52:02.692 INFO  NewHadoopRDD - Input split: hdfs://localhost:33789/user/runner/hdfs.bam:0+2396
14:52:02.692 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.693 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.694 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
14:52:02.694 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.695 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.695 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
14:52:02.696 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.696 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.697 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.698 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
14:52:02.700 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
14:52:02.700 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.701 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.702 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
14:52:02.702 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.703 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bam	dst=null	perm=null	proto=rpc
14:52:02.703 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bam.bai	dst=null	perm=null	proto=rpc
14:52:02.704 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.704 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.704 INFO  audit - allowed=true	ugi=runner (auth:SIMPLE)	ip=/127.0.0.1	cmd=open	src=/user/runner/hdfs.bai	dst=null	perm=null	proto=rpc
14:52:02.706 WARN  DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
14:52:02.707 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
14:52:02.707 WARN  DFSUtil - Unexpected value for data transfer bytes=664 duration=0
14:52:02.708 INFO  Executor - Finished task 0.0 in stage 291.0 (TID 378). 5288 bytes result sent to driver
14:52:02.712 INFO  TaskSetManager - Finished task 0.0 in stage 291.0 (TID 378) in 53 ms on localhost (executor driver) (1/1)
14:52:02.712 INFO  TaskSchedulerImpl - Removed TaskSet 291.0, whose tasks have all completed, from pool 
14:52:02.712 INFO  BlockManagerInfo - Removed broadcast_613_piece0 on localhost:44923 in memory (size: 154.0 KiB, free: 1919.8 MiB)
14:52:02.713 INFO  DAGScheduler - ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213) finished in 0.073 s
14:52:02.713 INFO  DAGScheduler - Job 227 is finished. Cancelling potential speculative or zombie tasks for this job
14:52:02.713 INFO  TaskSchedulerImpl - Killing all running tasks in stage 291: Stage finished
14:52:02.713 INFO  BlockManagerInfo - Removed broadcast_612_piece0 on localhost:44923 in memory (size: 50.2 KiB, free: 1919.8 MiB)
14:52:02.713 INFO  DAGScheduler - Job 227 finished: collect at ReadsSparkSourceUnitTest.java:213, took 0.073429 s
14:52:02.714 INFO  MiniDFSCluster - Shutting down the Mini HDFS Cluster
14:52:02.714 INFO  MiniDFSCluster - Shutting down DataNode 0
14:52:02.714 INFO  DirectoryScanner - Shutdown has been called
14:52:02.714 INFO  DataNode - Closing all peers.
14:52:02.714 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage1121274261722281395/data/data2, DS-774ad162-db7c-40c2-bc71-48ce58e3a3b7) exiting.
14:52:02.714 INFO  VolumeScanner - VolumeScanner(/tmp/minicluster_storage1121274261722281395/data/data1, DS-54f4481a-9859-4f8a-84ab-9a094192389b) exiting.
14:52:02.717 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@14e7fc8a{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
14:52:02.718 INFO  AbstractConnector - Stopped ServerConnector@3d26544d{HTTP/1.1, (http/1.1)}{localhost:0}
14:52:02.718 INFO  session - node0 Stopped scavenging
14:52:02.718 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@1e74d9f4{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
14:52:02.718 INFO  DataNode - Waiting up to 30 seconds for transfer threads to complete
14:52:02.719 INFO  Server - Stopping server on 44887
14:52:02.719 INFO  Server - Stopping IPC Server listener on 0
14:52:02.719 INFO  Server - Stopping IPC Server Responder
14:52:02.720 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
14:52:02.720 WARN  DataNode - Ending block pool service for: Block pool BP-1900678018-10.1.0.125-1772635921177 (Datanode Uuid 52d94e91-02f7-44c9-8ba0-6b499b7db073) service to localhost/127.0.0.1:33789
14:52:02.720 INFO  DataNode - Removed Block pool BP-1900678018-10.1.0.125-1772635921177 (Datanode Uuid 52d94e91-02f7-44c9-8ba0-6b499b7db073)
14:52:02.720 ERROR DataNode - Command processor encountered interrupt and exit.
14:52:02.720 INFO  FsDatasetImpl - Removing block pool BP-1900678018-10.1.0.125-1772635921177
14:52:02.720 WARN  DataNode - Ending command processor service for: Thread[Command processor,5,main]
14:52:02.720 WARN  IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
14:52:02.721 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
14:52:02.721 WARN  CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
14:52:02.721 INFO  FsDatasetAsyncDiskService - Shutting down all async disk service threads
14:52:02.721 INFO  FsDatasetAsyncDiskService - All async disk service threads have been shut down
14:52:02.721 INFO  RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
14:52:02.721 INFO  RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
14:52:02.721 INFO  DataNode - Shutdown complete.
14:52:02.721 INFO  MiniDFSCluster - Shutting down the namenode
14:52:02.721 INFO  FSNamesystem - Stopping services started for active state
14:52:02.721 INFO  FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
14:52:02.721 INFO  FSEditLog - Ending log segment 1, 13
14:52:02.721 INFO  FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
14:52:02.721 INFO  FSEditLog - Number of transactions: 14 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 2 Number of syncs: 13 SyncTimes(ms): 2 1 
14:52:02.722 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage1121274261722281395/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage1121274261722281395/name-0-1/current/edits_0000000000000000001-0000000000000000014
14:52:02.722 INFO  FileJournalManager - Finalizing edits file /tmp/minicluster_storage1121274261722281395/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage1121274261722281395/name-0-2/current/edits_0000000000000000001-0000000000000000014
14:52:02.722 INFO  FSEditLog - FSEditLogAsync was interrupted, exiting
14:52:02.722 INFO  CacheReplicationMonitor - Shutting down CacheReplicationMonitor
14:52:02.723 INFO  Server - Stopping server on 33789
14:52:02.723 INFO  Server - Stopping IPC Server listener on 0
14:52:02.723 INFO  Server - Stopping IPC Server Responder
14:52:02.724 INFO  BlockManager - Stopping MarkedDeleteBlockScrubber.
14:52:02.724 INFO  BlockManager - Stopping RedundancyMonitor.
14:52:02.728 INFO  FSNamesystem - Stopping services started for active state
14:52:02.728 INFO  FSNamesystem - Stopping services started for standby state
14:52:02.728 INFO  ContextHandler - Stopped o.e.j.w.WebAppContext@571eced6{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
14:52:02.729 INFO  AbstractConnector - Stopped ServerConnector@4e71f0b6{HTTP/1.1, (http/1.1)}{localhost:0}
14:52:02.729 INFO  session - node0 Stopped scavenging
14:52:02.729 INFO  ContextHandler - Stopped o.e.j.s.ServletContextHandler@61d0cda7{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
14:52:02.730 INFO  MetricsSystemImpl - Stopping DataNode metrics system...
14:52:02.730 INFO  MetricsSystemImpl - DataNode metrics system stopped.
14:52:02.730 INFO  MetricsSystemImpl - DataNode metrics system shutdown complete.