20:14:33.220 INFO MemoryStore - Block broadcast_536 stored as values in memory (estimated size 297.9 KiB, free 1918.3 MiB)
20:14:33.226 INFO MemoryStore - Block broadcast_536_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.2 MiB)
20:14:33.226 INFO BlockManagerInfo - Added broadcast_536_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.6 MiB)
20:14:33.227 INFO SparkContext - Created broadcast 536 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.256 INFO MemoryStore - Block broadcast_537 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
20:14:33.262 INFO MemoryStore - Block broadcast_537_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
20:14:33.262 INFO BlockManagerInfo - Added broadcast_537_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.6 MiB)
20:14:33.262 INFO SparkContext - Created broadcast 537 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.286 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:33.286 INFO DAGScheduler - Got job 201 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:33.286 INFO DAGScheduler - Final stage: ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:33.286 INFO DAGScheduler - Parents of final stage: List()
20:14:33.286 INFO DAGScheduler - Missing parents: List()
20:14:33.286 INFO DAGScheduler - Submitting ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:33.287 INFO MemoryStore - Block broadcast_538 stored as values in memory (estimated size 3.0 KiB, free 1917.9 MiB)
20:14:33.287 INFO MemoryStore - Block broadcast_538_piece0 stored as bytes in memory (estimated size 1777.0 B, free 1917.9 MiB)
20:14:33.287 INFO BlockManagerInfo - Added broadcast_538_piece0 in memory on localhost:35739 (size: 1777.0 B, free: 1919.6 MiB)
20:14:33.288 INFO SparkContext - Created broadcast 538 from broadcast at DAGScheduler.scala:1580
20:14:33.288 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:33.288 INFO TaskSchedulerImpl - Adding task set 265.0 with 4 tasks resource profile 0
20:14:33.289 INFO TaskSetManager - Starting task 0.0 in stage 265.0 (TID 321) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes)
20:14:33.289 INFO TaskSetManager - Starting task 1.0 in stage 265.0 (TID 322) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes)
20:14:33.289 INFO TaskSetManager - Starting task 2.0 in stage 265.0 (TID 323) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes)
20:14:33.289 INFO TaskSetManager - Starting task 3.0 in stage 265.0 (TID 324) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes)
20:14:33.290 INFO Executor - Running task 2.0 in stage 265.0 (TID 323)
20:14:33.291 INFO Executor - Finished task 2.0 in stage 265.0 (TID 323). 40337 bytes result sent to driver
20:14:33.291 INFO Executor - Running task 0.0 in stage 265.0 (TID 321)
20:14:33.292 INFO Executor - Finished task 0.0 in stage 265.0 (TID 321). 40312 bytes result sent to driver
20:14:33.293 INFO Executor - Running task 1.0 in stage 265.0 (TID 322)
20:14:33.294 INFO Executor - Finished task 1.0 in stage 265.0 (TID 322). 40426 bytes result sent to driver
20:14:33.294 INFO Executor - Running task 3.0 in stage 265.0 (TID 324)
20:14:33.294 INFO TaskSetManager - Finished task 2.0 in stage 265.0 (TID 323) in 5 ms on localhost (executor driver) (1/4)
20:14:33.295 INFO TaskSetManager - Finished task 0.0 in stage 265.0 (TID 321) in 7 ms on localhost (executor driver) (2/4)
20:14:33.295 INFO TaskSetManager - Finished task 1.0 in stage 265.0 (TID 322) in 6 ms on localhost (executor driver) (3/4)
20:14:33.296 INFO Executor - Finished task 3.0 in stage 265.0 (TID 324). 40805 bytes result sent to driver
20:14:33.296 INFO TaskSetManager - Finished task 3.0 in stage 265.0 (TID 324) in 7 ms on localhost (executor driver) (4/4)
20:14:33.297 INFO TaskSchedulerImpl - Removed TaskSet 265.0, whose tasks have all completed, from pool
20:14:33.297 INFO DAGScheduler - ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.010 s
20:14:33.297 INFO DAGScheduler - Job 201 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.297 INFO TaskSchedulerImpl - Killing all running tasks in stage 265: Stage finished
20:14:33.297 INFO DAGScheduler - Job 201 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.011006 s
20:14:33.298 INFO FileInputFormat - Total input files to process : 1
20:14:33.337 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:33.338 INFO DAGScheduler - Got job 202 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:33.338 INFO DAGScheduler - Final stage: ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:33.338 INFO DAGScheduler - Parents of final stage: List()
20:14:33.338 INFO DAGScheduler - Missing parents: List()
20:14:33.338 INFO DAGScheduler - Submitting ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:33.357 INFO MemoryStore - Block broadcast_539 stored as values in memory (estimated size 426.2 KiB, free 1917.5 MiB)
20:14:33.358 INFO MemoryStore - Block broadcast_539_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.3 MiB)
20:14:33.358 INFO BlockManagerInfo - Added broadcast_539_piece0 in memory on localhost:35739 (size: 153.6 KiB, free: 1919.4 MiB)
20:14:33.358 INFO SparkContext - Created broadcast 539 from broadcast at DAGScheduler.scala:1580
20:14:33.359 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:33.359 INFO TaskSchedulerImpl - Adding task set 266.0 with 1 tasks resource profile 0
20:14:33.359 INFO TaskSetManager - Starting task 0.0 in stage 266.0 (TID 325) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
20:14:33.360 INFO Executor - Running task 0.0 in stage 266.0 (TID 325)
20:14:33.389 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
20:14:33.392 INFO Executor - Finished task 0.0 in stage 266.0 (TID 325). 159572 bytes result sent to driver
20:14:33.393 INFO TaskSetManager - Finished task 0.0 in stage 266.0 (TID 325) in 34 ms on localhost (executor driver) (1/1)
20:14:33.393 INFO TaskSchedulerImpl - Removed TaskSet 266.0, whose tasks have all completed, from pool
20:14:33.393 INFO DAGScheduler - ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.055 s
20:14:33.393 INFO DAGScheduler - Job 202 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.393 INFO TaskSchedulerImpl - Killing all running tasks in stage 266: Stage finished
20:14:33.393 INFO DAGScheduler - Job 202 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.055807 s
20:14:33.397 INFO MemoryStore - Block broadcast_540 stored as values in memory (estimated size 298.0 KiB, free 1917.0 MiB)
20:14:33.403 INFO MemoryStore - Block broadcast_540_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.0 MiB)
20:14:33.403 INFO BlockManagerInfo - Added broadcast_540_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.4 MiB)
20:14:33.403 INFO SparkContext - Created broadcast 540 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.426 INFO MemoryStore - Block broadcast_541 stored as values in memory (estimated size 298.0 KiB, free 1916.7 MiB)
20:14:33.433 INFO MemoryStore - Block broadcast_541_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1916.6 MiB)
20:14:33.433 INFO BlockManagerInfo - Added broadcast_541_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.4 MiB)
20:14:33.433 INFO SparkContext - Created broadcast 541 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.456 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:33.456 INFO DAGScheduler - Got job 203 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:33.456 INFO DAGScheduler - Final stage: ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:33.456 INFO DAGScheduler - Parents of final stage: List()
20:14:33.456 INFO DAGScheduler - Missing parents: List()
20:14:33.456 INFO DAGScheduler - Submitting ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:33.456 INFO MemoryStore - Block broadcast_542 stored as values in memory (estimated size 3.0 KiB, free 1916.6 MiB)
20:14:33.457 INFO MemoryStore - Block broadcast_542_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1916.6 MiB)
20:14:33.457 INFO BlockManagerInfo - Added broadcast_542_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.3 MiB)
20:14:33.457 INFO SparkContext - Created broadcast 542 from broadcast at DAGScheduler.scala:1580
20:14:33.457 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:33.457 INFO TaskSchedulerImpl - Adding task set 267.0 with 4 tasks resource profile 0
20:14:33.458 INFO TaskSetManager - Starting task 0.0 in stage 267.0 (TID 326) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes)
20:14:33.458 INFO TaskSetManager - Starting task 1.0 in stage 267.0 (TID 327) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes)
20:14:33.459 INFO TaskSetManager - Starting task 2.0 in stage 267.0 (TID 328) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes)
20:14:33.459 INFO TaskSetManager - Starting task 3.0 in stage 267.0 (TID 329) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes)
20:14:33.459 INFO Executor - Running task 2.0 in stage 267.0 (TID 328)
20:14:33.459 INFO Executor - Running task 3.0 in stage 267.0 (TID 329)
20:14:33.459 INFO Executor - Running task 1.0 in stage 267.0 (TID 327)
20:14:33.459 INFO Executor - Running task 0.0 in stage 267.0 (TID 326)
20:14:33.461 INFO Executor - Finished task 1.0 in stage 267.0 (TID 327). 163368 bytes result sent to driver
20:14:33.461 INFO Executor - Finished task 2.0 in stage 267.0 (TID 328). 163356 bytes result sent to driver
20:14:33.461 INFO Executor - Finished task 0.0 in stage 267.0 (TID 326). 163161 bytes result sent to driver
20:14:33.462 INFO Executor - Finished task 3.0 in stage 267.0 (TID 329). 162694 bytes result sent to driver
20:14:33.462 INFO TaskSetManager - Finished task 2.0 in stage 267.0 (TID 328) in 4 ms on localhost (executor driver) (1/4)
20:14:33.462 INFO TaskSetManager - Finished task 0.0 in stage 267.0 (TID 326) in 4 ms on localhost (executor driver) (2/4)
20:14:33.463 INFO TaskSetManager - Finished task 1.0 in stage 267.0 (TID 327) in 5 ms on localhost (executor driver) (3/4)
20:14:33.463 INFO TaskSetManager - Finished task 3.0 in stage 267.0 (TID 329) in 4 ms on localhost (executor driver) (4/4)
20:14:33.463 INFO TaskSchedulerImpl - Removed TaskSet 267.0, whose tasks have all completed, from pool
20:14:33.463 INFO DAGScheduler - ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.007 s
20:14:33.463 INFO DAGScheduler - Job 203 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.463 INFO TaskSchedulerImpl - Killing all running tasks in stage 267: Stage finished
20:14:33.463 INFO DAGScheduler - Job 203 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007323 s
20:14:33.464 INFO FileInputFormat - Total input files to process : 1
20:14:33.500 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:33.500 INFO DAGScheduler - Got job 204 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:33.500 INFO DAGScheduler - Final stage: ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:33.500 INFO DAGScheduler - Parents of final stage: List()
20:14:33.500 INFO DAGScheduler - Missing parents: List()
20:14:33.500 INFO DAGScheduler - Submitting ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:33.516 INFO MemoryStore - Block broadcast_543 stored as values in memory (estimated size 426.2 KiB, free 1916.2 MiB)
20:14:33.517 INFO MemoryStore - Block broadcast_543_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1916.1 MiB)
20:14:33.518 INFO BlockManagerInfo - Added broadcast_543_piece0 in memory on localhost:35739 (size: 153.6 KiB, free: 1919.2 MiB)
20:14:33.518 INFO SparkContext - Created broadcast 543 from broadcast at DAGScheduler.scala:1580
20:14:33.518 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:33.518 INFO TaskSchedulerImpl - Adding task set 268.0 with 1 tasks resource profile 0
20:14:33.518 INFO TaskSetManager - Starting task 0.0 in stage 268.0 (TID 330) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes)
20:14:33.519 INFO Executor - Running task 0.0 in stage 268.0 (TID 330)
20:14:33.547 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
20:14:33.557 INFO Executor - Finished task 0.0 in stage 268.0 (TID 330). 650141 bytes result sent to driver
20:14:33.559 INFO TaskSetManager - Finished task 0.0 in stage 268.0 (TID 330) in 41 ms on localhost (executor driver) (1/1)
20:14:33.559 INFO TaskSchedulerImpl - Removed TaskSet 268.0, whose tasks have all completed, from pool
20:14:33.559 INFO DAGScheduler - ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.059 s
20:14:33.559 INFO DAGScheduler - Job 204 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.559 INFO TaskSchedulerImpl - Killing all running tasks in stage 268: Stage finished
20:14:33.559 INFO DAGScheduler - Job 204 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.059405 s
20:14:33.561 INFO MemoryStore - Block broadcast_544 stored as values in memory (estimated size 298.0 KiB, free 1915.8 MiB)
20:14:33.568 INFO BlockManagerInfo - Removed broadcast_532_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.2 MiB)
20:14:33.568 INFO BlockManagerInfo - Removed broadcast_534_piece0 on localhost:35739 in memory (size: 153.6 KiB, free: 1919.4 MiB)
20:14:33.568 INFO BlockManagerInfo - Removed broadcast_535_piece0 on localhost:35739 in memory (size: 54.5 KiB, free: 1919.5 MiB)
20:14:33.569 INFO BlockManagerInfo - Removed broadcast_543_piece0 on localhost:35739 in memory (size: 153.6 KiB, free: 1919.6 MiB)
20:14:33.569 INFO BlockManagerInfo - Removed broadcast_541_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.7 MiB)
20:14:33.569 INFO BlockManagerInfo - Removed broadcast_539_piece0 on localhost:35739 in memory (size: 153.6 KiB, free: 1919.8 MiB)
20:14:33.570 INFO BlockManagerInfo - Removed broadcast_531_piece0 on localhost:35739 in memory (size: 233.0 B, free: 1919.8 MiB)
20:14:33.570 INFO BlockManagerInfo - Removed broadcast_525_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:14:33.570 INFO BlockManagerInfo - Removed broadcast_538_piece0 on localhost:35739 in memory (size: 1777.0 B, free: 1919.9 MiB)
20:14:33.571 INFO BlockManagerInfo - Removed broadcast_542_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1919.9 MiB)
20:14:33.571 INFO BlockManagerInfo - Removed broadcast_537_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.9 MiB)
20:14:33.571 INFO BlockManagerInfo - Removed broadcast_540_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1920.0 MiB)
20:14:33.572 INFO BlockManagerInfo - Removed broadcast_536_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:14:33.573 INFO MemoryStore - Block broadcast_544_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.7 MiB)
20:14:33.573 INFO BlockManagerInfo - Added broadcast_544_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1920.0 MiB)
20:14:33.574 INFO SparkContext - Created broadcast 544 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.598 INFO MemoryStore - Block broadcast_545 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
20:14:33.604 INFO MemoryStore - Block broadcast_545_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
20:14:33.604 INFO BlockManagerInfo - Added broadcast_545_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.9 MiB)
20:14:33.604 INFO SparkContext - Created broadcast 545 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.627 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:33.627 INFO DAGScheduler - Got job 205 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:33.627 INFO DAGScheduler - Final stage: ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:33.627 INFO DAGScheduler - Parents of final stage: List()
20:14:33.627 INFO DAGScheduler - Missing parents: List()
20:14:33.627 INFO DAGScheduler - Submitting ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:33.628 INFO MemoryStore - Block broadcast_546 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
20:14:33.628 INFO MemoryStore - Block broadcast_546_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
20:14:33.628 INFO BlockManagerInfo - Added broadcast_546_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.9 MiB)
20:14:33.628 INFO SparkContext - Created broadcast 546 from broadcast at DAGScheduler.scala:1580
20:14:33.628 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:33.629 INFO TaskSchedulerImpl - Adding task set 269.0 with 4 tasks resource profile 0
20:14:33.629 INFO TaskSetManager - Starting task 0.0 in stage 269.0 (TID 331) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes)
20:14:33.629 INFO TaskSetManager - Starting task 1.0 in stage 269.0 (TID 332) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes)
20:14:33.630 INFO TaskSetManager - Starting task 2.0 in stage 269.0 (TID 333) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes)
20:14:33.630 INFO TaskSetManager - Starting task 3.0 in stage 269.0 (TID 334) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes)
20:14:33.630 INFO Executor - Running task 0.0 in stage 269.0 (TID 331)
20:14:33.630 INFO Executor - Running task 1.0 in stage 269.0 (TID 332)
20:14:33.630 INFO Executor - Running task 3.0 in stage 269.0 (TID 334)
20:14:33.630 INFO Executor - Running task 2.0 in stage 269.0 (TID 333)
20:14:33.633 INFO Executor - Finished task 0.0 in stage 269.0 (TID 331). 163161 bytes result sent to driver
20:14:33.633 INFO Executor - Finished task 2.0 in stage 269.0 (TID 333). 163356 bytes result sent to driver
20:14:33.633 INFO Executor - Finished task 3.0 in stage 269.0 (TID 334). 162737 bytes result sent to driver
20:14:33.633 INFO Executor - Finished task 1.0 in stage 269.0 (TID 332). 163368 bytes result sent to driver
20:14:33.634 INFO TaskSetManager - Finished task 2.0 in stage 269.0 (TID 333) in 4 ms on localhost (executor driver) (1/4)
20:14:33.634 INFO TaskSetManager - Finished task 0.0 in stage 269.0 (TID 331) in 5 ms on localhost (executor driver) (2/4)
20:14:33.634 INFO TaskSetManager - Finished task 3.0 in stage 269.0 (TID 334) in 4 ms on localhost (executor driver) (3/4)
20:14:33.634 INFO TaskSetManager - Finished task 1.0 in stage 269.0 (TID 332) in 5 ms on localhost (executor driver) (4/4)
20:14:33.635 INFO TaskSchedulerImpl - Removed TaskSet 269.0, whose tasks have all completed, from pool
20:14:33.635 INFO DAGScheduler - ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.007 s
20:14:33.635 INFO DAGScheduler - Job 205 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.635 INFO TaskSchedulerImpl - Killing all running tasks in stage 269: Stage finished
20:14:33.635 INFO DAGScheduler - Job 205 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007622 s
20:14:33.635 INFO FileInputFormat - Total input files to process : 1
20:14:33.671 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:33.672 INFO DAGScheduler - Got job 206 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:33.672 INFO DAGScheduler - Final stage: ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:33.672 INFO DAGScheduler - Parents of final stage: List()
20:14:33.672 INFO DAGScheduler - Missing parents: List()
20:14:33.672 INFO DAGScheduler - Submitting ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:33.688 INFO MemoryStore - Block broadcast_547 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
20:14:33.689 INFO MemoryStore - Block broadcast_547_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.7 MiB)
20:14:33.689 INFO BlockManagerInfo - Added broadcast_547_piece0 in memory on localhost:35739 (size: 153.7 KiB, free: 1919.8 MiB)
20:14:33.690 INFO SparkContext - Created broadcast 547 from broadcast at DAGScheduler.scala:1580
20:14:33.690 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:33.690 INFO TaskSchedulerImpl - Adding task set 270.0 with 1 tasks resource profile 0
20:14:33.690 INFO TaskSetManager - Starting task 0.0 in stage 270.0 (TID 335) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7912 bytes)
20:14:33.690 INFO Executor - Running task 0.0 in stage 270.0 (TID 335)
20:14:33.719 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam:0+216896
20:14:33.730 INFO Executor - Finished task 0.0 in stage 270.0 (TID 335). 650141 bytes result sent to driver
20:14:33.731 INFO TaskSetManager - Finished task 0.0 in stage 270.0 (TID 335) in 41 ms on localhost (executor driver) (1/1)
20:14:33.731 INFO TaskSchedulerImpl - Removed TaskSet 270.0, whose tasks have all completed, from pool
20:14:33.732 INFO DAGScheduler - ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.060 s
20:14:33.732 INFO DAGScheduler - Job 206 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.732 INFO TaskSchedulerImpl - Killing all running tasks in stage 270: Stage finished
20:14:33.732 INFO DAGScheduler - Job 206 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.060380 s
20:14:33.734 INFO MemoryStore - Block broadcast_548 stored as values in memory (estimated size 536.0 B, free 1918.7 MiB)
20:14:33.734 INFO MemoryStore - Block broadcast_548_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.7 MiB)
20:14:33.735 INFO BlockManagerInfo - Added broadcast_548_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.7 MiB)
20:14:33.735 INFO SparkContext - Created broadcast 548 from broadcast at CramSource.java:114
20:14:33.736 INFO MemoryStore - Block broadcast_549 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
20:14:33.742 INFO MemoryStore - Block broadcast_549_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
20:14:33.742 INFO BlockManagerInfo - Added broadcast_549_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.7 MiB)
20:14:33.742 INFO SparkContext - Created broadcast 549 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.761 INFO MemoryStore - Block broadcast_550 stored as values in memory (estimated size 536.0 B, free 1918.4 MiB)
20:14:33.761 INFO MemoryStore - Block broadcast_550_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.4 MiB)
20:14:33.761 INFO BlockManagerInfo - Added broadcast_550_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.7 MiB)
20:14:33.762 INFO SparkContext - Created broadcast 550 from broadcast at CramSource.java:114
20:14:33.762 INFO MemoryStore - Block broadcast_551 stored as values in memory (estimated size 297.9 KiB, free 1918.1 MiB)
20:14:33.768 INFO MemoryStore - Block broadcast_551_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.1 MiB)
20:14:33.768 INFO BlockManagerInfo - Added broadcast_551_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.7 MiB)
20:14:33.769 INFO SparkContext - Created broadcast 551 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.786 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:33.786 INFO DAGScheduler - Got job 207 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:33.786 INFO DAGScheduler - Final stage: ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:33.786 INFO DAGScheduler - Parents of final stage: List()
20:14:33.786 INFO DAGScheduler - Missing parents: List()
20:14:33.786 INFO DAGScheduler - Submitting ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:33.786 INFO MemoryStore - Block broadcast_552 stored as values in memory (estimated size 3.0 KiB, free 1918.1 MiB)
20:14:33.787 INFO MemoryStore - Block broadcast_552_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.1 MiB)
20:14:33.787 INFO BlockManagerInfo - Added broadcast_552_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.6 MiB)
20:14:33.787 INFO SparkContext - Created broadcast 552 from broadcast at DAGScheduler.scala:1580
20:14:33.787 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:33.787 INFO TaskSchedulerImpl - Adding task set 271.0 with 4 tasks resource profile 0
20:14:33.788 INFO TaskSetManager - Starting task 0.0 in stage 271.0 (TID 336) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes)
20:14:33.788 INFO TaskSetManager - Starting task 1.0 in stage 271.0 (TID 337) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes)
20:14:33.788 INFO TaskSetManager - Starting task 2.0 in stage 271.0 (TID 338) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes)
20:14:33.789 INFO TaskSetManager - Starting task 3.0 in stage 271.0 (TID 339) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes)
20:14:33.789 INFO Executor - Running task 1.0 in stage 271.0 (TID 337)
20:14:33.789 INFO Executor - Running task 0.0 in stage 271.0 (TID 336)
20:14:33.789 INFO Executor - Running task 3.0 in stage 271.0 (TID 339)
20:14:33.789 INFO Executor - Running task 2.0 in stage 271.0 (TID 338)
20:14:33.790 INFO Executor - Finished task 3.0 in stage 271.0 (TID 339). 39300 bytes result sent to driver
20:14:33.790 INFO Executor - Finished task 2.0 in stage 271.0 (TID 338). 39027 bytes result sent to driver
20:14:33.790 INFO Executor - Finished task 0.0 in stage 271.0 (TID 336). 38907 bytes result sent to driver
20:14:33.791 INFO Executor - Finished task 1.0 in stage 271.0 (TID 337). 39130 bytes result sent to driver
20:14:33.791 INFO TaskSetManager - Finished task 2.0 in stage 271.0 (TID 338) in 3 ms on localhost (executor driver) (1/4)
20:14:33.791 INFO TaskSetManager - Finished task 0.0 in stage 271.0 (TID 336) in 3 ms on localhost (executor driver) (2/4)
20:14:33.791 INFO TaskSetManager - Finished task 3.0 in stage 271.0 (TID 339) in 2 ms on localhost (executor driver) (3/4)
20:14:33.791 INFO TaskSetManager - Finished task 1.0 in stage 271.0 (TID 337) in 3 ms on localhost (executor driver) (4/4)
20:14:33.791 INFO TaskSchedulerImpl - Removed TaskSet 271.0, whose tasks have all completed, from pool
20:14:33.791 INFO DAGScheduler - ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
20:14:33.791 INFO DAGScheduler - Job 207 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.791 INFO TaskSchedulerImpl - Killing all running tasks in stage 271: Stage finished
20:14:33.791 INFO DAGScheduler - Job 207 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005697 s
20:14:33.792 INFO FileInputFormat - Total input files to process : 1
20:14:33.817 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:33.817 INFO DAGScheduler - Got job 208 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:33.817 INFO DAGScheduler - Final stage: ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:33.817 INFO DAGScheduler - Parents of final stage: List()
20:14:33.817 INFO DAGScheduler - Missing parents: List()
20:14:33.817 INFO DAGScheduler - Submitting ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:33.828 INFO MemoryStore - Block broadcast_553 stored as values in memory (estimated size 286.8 KiB, free 1917.8 MiB)
20:14:33.829 INFO MemoryStore - Block broadcast_553_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1917.7 MiB)
20:14:33.829 INFO BlockManagerInfo - Added broadcast_553_piece0 in memory on localhost:35739 (size: 103.6 KiB, free: 1919.5 MiB)
20:14:33.829 INFO SparkContext - Created broadcast 553 from broadcast at DAGScheduler.scala:1580
20:14:33.830 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:33.830 INFO TaskSchedulerImpl - Adding task set 272.0 with 1 tasks resource profile 0
20:14:33.830 INFO TaskSetManager - Starting task 0.0 in stage 272.0 (TID 340) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
20:14:33.830 INFO Executor - Running task 0.0 in stage 272.0 (TID 340)
20:14:33.852 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
20:14:33.859 INFO Executor - Finished task 0.0 in stage 272.0 (TID 340). 154101 bytes result sent to driver
20:14:33.860 INFO TaskSetManager - Finished task 0.0 in stage 272.0 (TID 340) in 30 ms on localhost (executor driver) (1/1)
20:14:33.860 INFO TaskSchedulerImpl - Removed TaskSet 272.0, whose tasks have all completed, from pool
20:14:33.860 INFO DAGScheduler - ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.042 s
20:14:33.860 INFO DAGScheduler - Job 208 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.860 INFO TaskSchedulerImpl - Killing all running tasks in stage 272: Stage finished
20:14:33.861 INFO DAGScheduler - Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.043477 s
20:14:33.863 INFO MemoryStore - Block broadcast_554 stored as values in memory (estimated size 608.0 B, free 1917.7 MiB)
20:14:33.864 INFO MemoryStore - Block broadcast_554_piece0 stored as bytes in memory (estimated size 206.0 B, free 1917.7 MiB)
20:14:33.864 INFO BlockManagerInfo - Added broadcast_554_piece0 in memory on localhost:35739 (size: 206.0 B, free: 1919.5 MiB)
20:14:33.864 INFO SparkContext - Created broadcast 554 from broadcast at CramSource.java:114
20:14:33.865 INFO MemoryStore - Block broadcast_555 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
20:14:33.871 INFO BlockManagerInfo - Removed broadcast_550_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.5 MiB)
20:14:33.871 INFO BlockManagerInfo - Removed broadcast_546_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:14:33.872 INFO BlockManagerInfo - Removed broadcast_548_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.6 MiB)
20:14:33.872 INFO BlockManagerInfo - Removed broadcast_547_piece0 on localhost:35739 in memory (size: 153.7 KiB, free: 1919.7 MiB)
20:14:33.873 INFO BlockManagerInfo - Removed broadcast_544_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.7 MiB)
20:14:33.873 INFO BlockManagerInfo - Removed broadcast_552_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1919.8 MiB)
20:14:33.873 INFO BlockManagerInfo - Removed broadcast_553_piece0 on localhost:35739 in memory (size: 103.6 KiB, free: 1919.9 MiB)
20:14:33.874 INFO BlockManagerInfo - Removed broadcast_549_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.9 MiB)
20:14:33.874 INFO BlockManagerInfo - Removed broadcast_551_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:14:33.874 INFO BlockManagerInfo - Removed broadcast_545_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1920.0 MiB)
20:14:33.877 INFO MemoryStore - Block broadcast_555_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:14:33.877 INFO BlockManagerInfo - Added broadcast_555_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1920.0 MiB)
20:14:33.877 INFO SparkContext - Created broadcast 555 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.895 INFO MemoryStore - Block broadcast_556 stored as values in memory (estimated size 608.0 B, free 1919.7 MiB)
20:14:33.895 INFO MemoryStore - Block broadcast_556_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.7 MiB)
20:14:33.895 INFO BlockManagerInfo - Added broadcast_556_piece0 in memory on localhost:35739 (size: 206.0 B, free: 1920.0 MiB)
20:14:33.896 INFO SparkContext - Created broadcast 556 from broadcast at CramSource.java:114
20:14:33.896 INFO MemoryStore - Block broadcast_557 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
20:14:33.902 INFO MemoryStore - Block broadcast_557_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
20:14:33.902 INFO BlockManagerInfo - Added broadcast_557_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.9 MiB)
20:14:33.903 INFO SparkContext - Created broadcast 557 from newAPIHadoopFile at PathSplitSource.java:96
20:14:33.919 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:33.920 INFO DAGScheduler - Got job 209 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:33.920 INFO DAGScheduler - Final stage: ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:33.920 INFO DAGScheduler - Parents of final stage: List()
20:14:33.920 INFO DAGScheduler - Missing parents: List()
20:14:33.920 INFO DAGScheduler - Submitting ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:33.920 INFO MemoryStore - Block broadcast_558 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
20:14:33.921 INFO MemoryStore - Block broadcast_558_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
20:14:33.921 INFO BlockManagerInfo - Added broadcast_558_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.9 MiB)
20:14:33.921 INFO SparkContext - Created broadcast 558 from broadcast at DAGScheduler.scala:1580
20:14:33.921 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:33.921 INFO TaskSchedulerImpl - Adding task set 273.0 with 4 tasks resource profile 0
20:14:33.922 INFO TaskSetManager - Starting task 0.0 in stage 273.0 (TID 341) (localhost, executor driver, partition 0, PROCESS_LOCAL, 8303 bytes)
20:14:33.922 INFO TaskSetManager - Starting task 1.0 in stage 273.0 (TID 342) (localhost, executor driver, partition 1, PROCESS_LOCAL, 8292 bytes)
20:14:33.922 INFO TaskSetManager - Starting task 2.0 in stage 273.0 (TID 343) (localhost, executor driver, partition 2, PROCESS_LOCAL, 8292 bytes)
20:14:33.922 INFO TaskSetManager - Starting task 3.0 in stage 273.0 (TID 344) (localhost, executor driver, partition 3, PROCESS_LOCAL, 8184 bytes)
20:14:33.922 INFO Executor - Running task 0.0 in stage 273.0 (TID 341)
20:14:33.922 INFO Executor - Running task 1.0 in stage 273.0 (TID 342)
20:14:33.922 INFO Executor - Running task 2.0 in stage 273.0 (TID 343)
20:14:33.922 INFO Executor - Running task 3.0 in stage 273.0 (TID 344)
20:14:33.923 INFO Executor - Finished task 1.0 in stage 273.0 (TID 342). 1524 bytes result sent to driver
20:14:33.923 INFO Executor - Finished task 2.0 in stage 273.0 (TID 343). 1524 bytes result sent to driver
20:14:33.924 INFO Executor - Finished task 3.0 in stage 273.0 (TID 344). 1416 bytes result sent to driver
20:14:33.924 INFO Executor - Finished task 0.0 in stage 273.0 (TID 341). 1578 bytes result sent to driver
20:14:33.924 INFO TaskSetManager - Finished task 1.0 in stage 273.0 (TID 342) in 2 ms on localhost (executor driver) (1/4)
20:14:33.924 INFO TaskSetManager - Finished task 2.0 in stage 273.0 (TID 343) in 2 ms on localhost (executor driver) (2/4)
20:14:33.924 INFO TaskSetManager - Finished task 3.0 in stage 273.0 (TID 344) in 2 ms on localhost (executor driver) (3/4)
20:14:33.924 INFO TaskSetManager - Finished task 0.0 in stage 273.0 (TID 341) in 3 ms on localhost (executor driver) (4/4)
20:14:33.924 INFO TaskSchedulerImpl - Removed TaskSet 273.0, whose tasks have all completed, from pool
20:14:33.924 INFO DAGScheduler - ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.004 s
20:14:33.924 INFO DAGScheduler - Job 209 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.924 INFO TaskSchedulerImpl - Killing all running tasks in stage 273: Stage finished
20:14:33.925 INFO DAGScheduler - Job 209 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005084 s
20:14:33.925 INFO FileInputFormat - Total input files to process : 1
20:14:33.950 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:33.951 INFO DAGScheduler - Got job 210 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:33.951 INFO DAGScheduler - Final stage: ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:33.951 INFO DAGScheduler - Parents of final stage: List()
20:14:33.951 INFO DAGScheduler - Missing parents: List()
20:14:33.951 INFO DAGScheduler - Submitting ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:33.962 INFO MemoryStore - Block broadcast_559 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
20:14:33.963 INFO MemoryStore - Block broadcast_559_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
20:14:33.963 INFO BlockManagerInfo - Added broadcast_559_piece0 in memory on localhost:35739 (size: 103.6 KiB, free: 1919.8 MiB)
20:14:33.963 INFO SparkContext - Created broadcast 559 from broadcast at DAGScheduler.scala:1580
20:14:33.963 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:33.963 INFO TaskSchedulerImpl - Adding task set 274.0 with 1 tasks resource profile 0
20:14:33.964 INFO TaskSetManager - Starting task 0.0 in stage 274.0 (TID 345) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7851 bytes)
20:14:33.964 INFO Executor - Running task 0.0 in stage 274.0 (TID 345)
20:14:33.984 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram:0+13330
20:14:33.987 INFO Executor - Finished task 0.0 in stage 274.0 (TID 345). 3736 bytes result sent to driver
20:14:33.987 INFO TaskSetManager - Finished task 0.0 in stage 274.0 (TID 345) in 23 ms on localhost (executor driver) (1/1)
20:14:33.988 INFO TaskSchedulerImpl - Removed TaskSet 274.0, whose tasks have all completed, from pool
20:14:33.988 INFO DAGScheduler - ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.037 s
20:14:33.988 INFO DAGScheduler - Job 210 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:33.988 INFO TaskSchedulerImpl - Killing all running tasks in stage 274: Stage finished
20:14:33.988 INFO DAGScheduler - Job 210 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.037472 s
20:14:33.990 INFO MemoryStore - Block broadcast_560 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
20:14:33.996 INFO MemoryStore - Block broadcast_560_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:14:33.996 INFO BlockManagerInfo - Added broadcast_560_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.7 MiB)
20:14:33.997 INFO SparkContext - Created broadcast 560 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:14:34.020 INFO MemoryStore - Block broadcast_561 stored as values in memory (estimated size 297.9 KiB, free 1918.3 MiB)
20:14:34.026 INFO MemoryStore - Block broadcast_561_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.3 MiB)
20:14:34.026 INFO BlockManagerInfo - Added broadcast_561_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.026 INFO SparkContext - Created broadcast 561 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.048 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:34.049 INFO DAGScheduler - Got job 211 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:34.049 INFO DAGScheduler - Final stage: ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:34.049 INFO DAGScheduler - Parents of final stage: List()
20:14:34.049 INFO DAGScheduler - Missing parents: List()
20:14:34.049 INFO DAGScheduler - Submitting ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:34.049 INFO MemoryStore - Block broadcast_562 stored as values in memory (estimated size 3.0 KiB, free 1918.2 MiB)
20:14:34.050 INFO MemoryStore - Block broadcast_562_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.2 MiB)
20:14:34.050 INFO BlockManagerInfo - Added broadcast_562_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.7 MiB)
20:14:34.050 INFO SparkContext - Created broadcast 562 from broadcast at DAGScheduler.scala:1580
20:14:34.050 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:34.050 INFO TaskSchedulerImpl - Adding task set 275.0 with 4 tasks resource profile 0
20:14:34.051 INFO TaskSetManager - Starting task 0.0 in stage 275.0 (TID 346) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes)
20:14:34.051 INFO TaskSetManager - Starting task 1.0 in stage 275.0 (TID 347) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes)
20:14:34.051 INFO TaskSetManager - Starting task 2.0 in stage 275.0 (TID 348) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes)
20:14:34.051 INFO TaskSetManager - Starting task 3.0 in stage 275.0 (TID 349) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes)
20:14:34.051 INFO Executor - Running task 2.0 in stage 275.0 (TID 348)
20:14:34.051 INFO Executor - Running task 3.0 in stage 275.0 (TID 349)
20:14:34.052 INFO Executor - Running task 0.0 in stage 275.0 (TID 346)
20:14:34.052 INFO Executor - Running task 1.0 in stage 275.0 (TID 347)
20:14:34.053 INFO Executor - Finished task 2.0 in stage 275.0 (TID 348). 40337 bytes result sent to driver
20:14:34.053 INFO Executor - Finished task 0.0 in stage 275.0 (TID 346). 40312 bytes result sent to driver
20:14:34.053 INFO Executor - Finished task 1.0 in stage 275.0 (TID 347). 40426 bytes result sent to driver
20:14:34.053 INFO TaskSetManager - Finished task 2.0 in stage 275.0 (TID 348) in 2 ms on localhost (executor driver) (1/4)
20:14:34.053 INFO Executor - Finished task 3.0 in stage 275.0 (TID 349). 40805 bytes result sent to driver
20:14:34.054 INFO TaskSetManager - Finished task 0.0 in stage 275.0 (TID 346) in 4 ms on localhost (executor driver) (2/4)
20:14:34.054 INFO TaskSetManager - Finished task 1.0 in stage 275.0 (TID 347) in 3 ms on localhost (executor driver) (3/4)
20:14:34.054 INFO TaskSetManager - Finished task 3.0 in stage 275.0 (TID 349) in 3 ms on localhost (executor driver) (4/4)
20:14:34.054 INFO DAGScheduler - ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
20:14:34.054 INFO DAGScheduler - Job 211 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.055 INFO TaskSchedulerImpl - Removed TaskSet 275.0, whose tasks have all completed, from pool
20:14:34.055 INFO TaskSchedulerImpl - Killing all running tasks in stage 275: Stage finished
20:14:34.055 INFO DAGScheduler - Job 211 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006269 s
20:14:34.055 INFO FileInputFormat - Total input files to process : 1
20:14:34.092 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:34.093 INFO DAGScheduler - Got job 212 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:34.093 INFO DAGScheduler - Final stage: ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:34.093 INFO DAGScheduler - Parents of final stage: List()
20:14:34.093 INFO DAGScheduler - Missing parents: List()
20:14:34.093 INFO DAGScheduler - Submitting ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:34.109 INFO MemoryStore - Block broadcast_563 stored as values in memory (estimated size 426.2 KiB, free 1917.8 MiB)
20:14:34.110 INFO MemoryStore - Block broadcast_563_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.7 MiB)
20:14:34.111 INFO BlockManagerInfo - Added broadcast_563_piece0 in memory on localhost:35739 (size: 153.7 KiB, free: 1919.5 MiB)
20:14:34.111 INFO SparkContext - Created broadcast 563 from broadcast at DAGScheduler.scala:1580
20:14:34.111 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:34.111 INFO TaskSchedulerImpl - Adding task set 276.0 with 1 tasks resource profile 0
20:14:34.111 INFO TaskSetManager - Starting task 0.0 in stage 276.0 (TID 350) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
20:14:34.112 INFO Executor - Running task 0.0 in stage 276.0 (TID 350)
20:14:34.140 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:14:34.145 INFO Executor - Finished task 0.0 in stage 276.0 (TID 350). 159572 bytes result sent to driver
20:14:34.146 INFO TaskSetManager - Finished task 0.0 in stage 276.0 (TID 350) in 35 ms on localhost (executor driver) (1/1)
20:14:34.146 INFO TaskSchedulerImpl - Removed TaskSet 276.0, whose tasks have all completed, from pool
20:14:34.146 INFO DAGScheduler - ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.053 s
20:14:34.146 INFO DAGScheduler - Job 212 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.146 INFO TaskSchedulerImpl - Killing all running tasks in stage 276: Stage finished
20:14:34.146 INFO DAGScheduler - Job 212 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.053989 s
20:14:34.149 INFO MemoryStore - Block broadcast_564 stored as values in memory (estimated size 536.0 B, free 1917.7 MiB)
20:14:34.149 INFO MemoryStore - Block broadcast_564_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.7 MiB)
20:14:34.149 INFO BlockManagerInfo - Added broadcast_564_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.5 MiB)
20:14:34.150 INFO SparkContext - Created broadcast 564 from broadcast at CramSource.java:114
20:14:34.151 INFO MemoryStore - Block broadcast_565 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
20:14:34.161 INFO MemoryStore - Block broadcast_565_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
20:14:34.161 INFO BlockManagerInfo - Added broadcast_565_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.5 MiB)
20:14:34.161 INFO SparkContext - Created broadcast 565 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:14:34.189 INFO MemoryStore - Block broadcast_566 stored as values in memory (estimated size 536.0 B, free 1917.3 MiB)
20:14:34.194 INFO MemoryStore - Block broadcast_566_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.3 MiB)
20:14:34.194 INFO BlockManagerInfo - Added broadcast_566_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.5 MiB)
20:14:34.194 INFO BlockManagerInfo - Removed broadcast_555_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.5 MiB)
20:14:34.194 INFO SparkContext - Created broadcast 566 from broadcast at CramSource.java:114
20:14:34.195 INFO BlockManagerInfo - Removed broadcast_565_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:14:34.195 INFO BlockManagerInfo - Removed broadcast_562_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:14:34.195 INFO BlockManagerInfo - Removed broadcast_561_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:14:34.195 INFO MemoryStore - Block broadcast_567 stored as values in memory (estimated size 297.9 KiB, free 1917.8 MiB)
20:14:34.196 INFO BlockManagerInfo - Removed broadcast_556_piece0 on localhost:35739 in memory (size: 206.0 B, free: 1919.6 MiB)
20:14:34.196 INFO BlockManagerInfo - Removed broadcast_564_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.6 MiB)
20:14:34.196 INFO BlockManagerInfo - Removed broadcast_563_piece0 on localhost:35739 in memory (size: 153.7 KiB, free: 1919.8 MiB)
20:14:34.197 INFO BlockManagerInfo - Removed broadcast_554_piece0 on localhost:35739 in memory (size: 206.0 B, free: 1919.8 MiB)
20:14:34.197 INFO BlockManagerInfo - Removed broadcast_557_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:14:34.198 INFO BlockManagerInfo - Removed broadcast_559_piece0 on localhost:35739 in memory (size: 103.6 KiB, free: 1919.9 MiB)
20:14:34.198 INFO BlockManagerInfo - Removed broadcast_558_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1920.0 MiB)
20:14:34.198 INFO BlockManagerInfo - Removed broadcast_560_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:14:34.203 INFO MemoryStore - Block broadcast_567_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:14:34.203 INFO BlockManagerInfo - Added broadcast_567_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1920.0 MiB)
20:14:34.203 INFO SparkContext - Created broadcast 567 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.220 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:34.220 INFO DAGScheduler - Got job 213 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:34.220 INFO DAGScheduler - Final stage: ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:34.220 INFO DAGScheduler - Parents of final stage: List()
20:14:34.220 INFO DAGScheduler - Missing parents: List()
20:14:34.220 INFO DAGScheduler - Submitting ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:34.221 INFO MemoryStore - Block broadcast_568 stored as values in memory (estimated size 3.0 KiB, free 1919.7 MiB)
20:14:34.221 INFO MemoryStore - Block broadcast_568_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.7 MiB)
20:14:34.221 INFO BlockManagerInfo - Added broadcast_568_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.9 MiB)
20:14:34.221 INFO SparkContext - Created broadcast 568 from broadcast at DAGScheduler.scala:1580
20:14:34.221 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:34.221 INFO TaskSchedulerImpl - Adding task set 277.0 with 4 tasks resource profile 0
20:14:34.222 INFO TaskSetManager - Starting task 0.0 in stage 277.0 (TID 351) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes)
20:14:34.222 INFO TaskSetManager - Starting task 1.0 in stage 277.0 (TID 352) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes)
20:14:34.223 INFO TaskSetManager - Starting task 2.0 in stage 277.0 (TID 353) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes)
20:14:34.223 INFO TaskSetManager - Starting task 3.0 in stage 277.0 (TID 354) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes)
20:14:34.223 INFO Executor - Running task 0.0 in stage 277.0 (TID 351)
20:14:34.223 INFO Executor - Running task 1.0 in stage 277.0 (TID 352)
20:14:34.223 INFO Executor - Running task 2.0 in stage 277.0 (TID 353)
20:14:34.224 INFO Executor - Running task 3.0 in stage 277.0 (TID 354)
20:14:34.225 INFO Executor - Finished task 3.0 in stage 277.0 (TID 354). 39300 bytes result sent to driver
20:14:34.225 INFO Executor - Finished task 0.0 in stage 277.0 (TID 351). 38950 bytes result sent to driver
20:14:34.225 INFO Executor - Finished task 2.0 in stage 277.0 (TID 353). 39070 bytes result sent to driver
20:14:34.225 INFO Executor - Finished task 1.0 in stage 277.0 (TID 352). 39130 bytes result sent to driver
20:14:34.226 INFO TaskSetManager - Finished task 3.0 in stage 277.0 (TID 354) in 3 ms on localhost (executor driver) (1/4)
20:14:34.226 INFO TaskSetManager - Finished task 2.0 in stage 277.0 (TID 353) in 3 ms on localhost (executor driver) (2/4)
20:14:34.226 INFO TaskSetManager - Finished task 0.0 in stage 277.0 (TID 351) in 4 ms on localhost (executor driver) (3/4)
20:14:34.227 INFO TaskSetManager - Finished task 1.0 in stage 277.0 (TID 352) in 5 ms on localhost (executor driver) (4/4)
20:14:34.227 INFO TaskSchedulerImpl - Removed TaskSet 277.0, whose tasks have all completed, from pool
20:14:34.228 INFO DAGScheduler - ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.008 s
20:14:34.228 INFO DAGScheduler - Job 213 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.228 INFO TaskSchedulerImpl - Killing all running tasks in stage 277: Stage finished
20:14:34.228 INFO DAGScheduler - Job 213 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.008133 s
20:14:34.229 INFO FileInputFormat - Total input files to process : 1
20:14:34.255 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:34.255 INFO DAGScheduler - Got job 214 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:34.255 INFO DAGScheduler - Final stage: ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:34.255 INFO DAGScheduler - Parents of final stage: List()
20:14:34.255 INFO DAGScheduler - Missing parents: List()
20:14:34.256 INFO DAGScheduler - Submitting ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:34.267 INFO MemoryStore - Block broadcast_569 stored as values in memory (estimated size 286.8 KiB, free 1919.4 MiB)
20:14:34.267 INFO MemoryStore - Block broadcast_569_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1919.3 MiB)
20:14:34.268 INFO BlockManagerInfo - Added broadcast_569_piece0 in memory on localhost:35739 (size: 103.6 KiB, free: 1919.8 MiB)
20:14:34.268 INFO SparkContext - Created broadcast 569 from broadcast at DAGScheduler.scala:1580
20:14:34.268 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:34.268 INFO TaskSchedulerImpl - Adding task set 278.0 with 1 tasks resource profile 0
20:14:34.268 INFO TaskSetManager - Starting task 0.0 in stage 278.0 (TID 355) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
20:14:34.269 INFO Executor - Running task 0.0 in stage 278.0 (TID 355)
20:14:34.289 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
20:14:34.296 INFO Executor - Finished task 0.0 in stage 278.0 (TID 355). 154058 bytes result sent to driver
20:14:34.297 INFO TaskSetManager - Finished task 0.0 in stage 278.0 (TID 355) in 29 ms on localhost (executor driver) (1/1)
20:14:34.297 INFO TaskSchedulerImpl - Removed TaskSet 278.0, whose tasks have all completed, from pool
20:14:34.297 INFO DAGScheduler - ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.041 s
20:14:34.297 INFO DAGScheduler - Job 214 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.297 INFO TaskSchedulerImpl - Killing all running tasks in stage 278: Stage finished
20:14:34.298 INFO DAGScheduler - Job 214 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.042404 s
20:14:34.300 INFO MemoryStore - Block broadcast_570 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
20:14:34.306 INFO MemoryStore - Block broadcast_570_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
20:14:34.307 INFO BlockManagerInfo - Added broadcast_570_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.8 MiB)
20:14:34.307 INFO SparkContext - Created broadcast 570 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.328 INFO MemoryStore - Block broadcast_571 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
20:14:34.335 INFO MemoryStore - Block broadcast_571_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:14:34.335 INFO BlockManagerInfo - Added broadcast_571_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.335 INFO SparkContext - Created broadcast 571 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.357 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:34.358 INFO DAGScheduler - Got job 215 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:34.358 INFO DAGScheduler - Final stage: ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:34.358 INFO DAGScheduler - Parents of final stage: List()
20:14:34.358 INFO DAGScheduler - Missing parents: List()
20:14:34.358 INFO DAGScheduler - Submitting ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:34.358 INFO MemoryStore - Block broadcast_572 stored as values in memory (estimated size 3.0 KiB, free 1918.6 MiB)
20:14:34.358 INFO MemoryStore - Block broadcast_572_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.6 MiB)
20:14:34.358 INFO BlockManagerInfo - Added broadcast_572_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.7 MiB)
20:14:34.359 INFO SparkContext - Created broadcast 572 from broadcast at DAGScheduler.scala:1580
20:14:34.359 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:34.359 INFO TaskSchedulerImpl - Adding task set 279.0 with 4 tasks resource profile 0
20:14:34.359 INFO TaskSetManager - Starting task 0.0 in stage 279.0 (TID 356) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes)
20:14:34.360 INFO TaskSetManager - Starting task 1.0 in stage 279.0 (TID 357) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes)
20:14:34.360 INFO TaskSetManager - Starting task 2.0 in stage 279.0 (TID 358) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes)
20:14:34.360 INFO TaskSetManager - Starting task 3.0 in stage 279.0 (TID 359) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes)
20:14:34.360 INFO Executor - Running task 2.0 in stage 279.0 (TID 358)
20:14:34.360 INFO Executor - Running task 0.0 in stage 279.0 (TID 356)
20:14:34.360 INFO Executor - Running task 1.0 in stage 279.0 (TID 357)
20:14:34.360 INFO Executor - Running task 3.0 in stage 279.0 (TID 359)
20:14:34.362 INFO Executor - Finished task 2.0 in stage 279.0 (TID 358). 40337 bytes result sent to driver
20:14:34.362 INFO Executor - Finished task 1.0 in stage 279.0 (TID 357). 40426 bytes result sent to driver
20:14:34.362 INFO Executor - Finished task 3.0 in stage 279.0 (TID 359). 40805 bytes result sent to driver
20:14:34.362 INFO TaskSetManager - Finished task 2.0 in stage 279.0 (TID 358) in 2 ms on localhost (executor driver) (1/4)
20:14:34.362 INFO TaskSetManager - Finished task 1.0 in stage 279.0 (TID 357) in 3 ms on localhost (executor driver) (2/4)
20:14:34.362 INFO Executor - Finished task 0.0 in stage 279.0 (TID 356). 40355 bytes result sent to driver
20:14:34.363 INFO TaskSetManager - Finished task 3.0 in stage 279.0 (TID 359) in 3 ms on localhost (executor driver) (3/4)
20:14:34.363 INFO TaskSetManager - Finished task 0.0 in stage 279.0 (TID 356) in 4 ms on localhost (executor driver) (4/4)
20:14:34.363 INFO TaskSchedulerImpl - Removed TaskSet 279.0, whose tasks have all completed, from pool
20:14:34.363 INFO DAGScheduler - ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
20:14:34.363 INFO DAGScheduler - Job 215 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.363 INFO TaskSchedulerImpl - Killing all running tasks in stage 279: Stage finished
20:14:34.363 INFO DAGScheduler - Job 215 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005693 s
20:14:34.364 INFO FileInputFormat - Total input files to process : 1
20:14:34.405 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:34.405 INFO DAGScheduler - Got job 216 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:34.405 INFO DAGScheduler - Final stage: ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:34.405 INFO DAGScheduler - Parents of final stage: List()
20:14:34.405 INFO DAGScheduler - Missing parents: List()
20:14:34.405 INFO DAGScheduler - Submitting ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:34.421 INFO MemoryStore - Block broadcast_573 stored as values in memory (estimated size 426.2 KiB, free 1918.2 MiB)
20:14:34.423 INFO MemoryStore - Block broadcast_573_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.0 MiB)
20:14:34.423 INFO BlockManagerInfo - Added broadcast_573_piece0 in memory on localhost:35739 (size: 153.7 KiB, free: 1919.6 MiB)
20:14:34.423 INFO SparkContext - Created broadcast 573 from broadcast at DAGScheduler.scala:1580
20:14:34.423 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:34.423 INFO TaskSchedulerImpl - Adding task set 280.0 with 1 tasks resource profile 0
20:14:34.423 INFO TaskSetManager - Starting task 0.0 in stage 280.0 (TID 360) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
20:14:34.424 INFO Executor - Running task 0.0 in stage 280.0 (TID 360)
20:14:34.452 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
20:14:34.455 INFO Executor - Finished task 0.0 in stage 280.0 (TID 360). 159572 bytes result sent to driver
20:14:34.456 INFO TaskSetManager - Finished task 0.0 in stage 280.0 (TID 360) in 33 ms on localhost (executor driver) (1/1)
20:14:34.456 INFO TaskSchedulerImpl - Removed TaskSet 280.0, whose tasks have all completed, from pool
20:14:34.456 INFO DAGScheduler - ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.051 s
20:14:34.456 INFO DAGScheduler - Job 216 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.456 INFO TaskSchedulerImpl - Killing all running tasks in stage 280: Stage finished
20:14:34.456 INFO DAGScheduler - Job 216 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.051330 s
20:14:34.458 INFO MemoryStore - Block broadcast_574 stored as values in memory (estimated size 536.0 B, free 1918.0 MiB)
20:14:34.459 INFO MemoryStore - Block broadcast_574_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.0 MiB)
20:14:34.459 INFO BlockManagerInfo - Added broadcast_574_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.6 MiB)
20:14:34.459 INFO SparkContext - Created broadcast 574 from broadcast at CramSource.java:114
20:14:34.460 INFO MemoryStore - Block broadcast_575 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
20:14:34.467 INFO MemoryStore - Block broadcast_575_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
20:14:34.467 INFO BlockManagerInfo - Added broadcast_575_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.5 MiB)
20:14:34.467 INFO SparkContext - Created broadcast 575 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.486 INFO MemoryStore - Block broadcast_576 stored as values in memory (estimated size 536.0 B, free 1917.7 MiB)
20:14:34.486 INFO MemoryStore - Block broadcast_576_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.7 MiB)
20:14:34.486 INFO BlockManagerInfo - Added broadcast_576_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.5 MiB)
20:14:34.487 INFO SparkContext - Created broadcast 576 from broadcast at CramSource.java:114
20:14:34.487 INFO MemoryStore - Block broadcast_577 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
20:14:34.493 INFO BlockManagerInfo - Removed broadcast_575_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:14:34.494 INFO BlockManagerInfo - Removed broadcast_570_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:14:34.494 INFO BlockManagerInfo - Removed broadcast_574_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.6 MiB)
20:14:34.494 INFO BlockManagerInfo - Removed broadcast_566_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.6 MiB)
20:14:34.495 INFO BlockManagerInfo - Removed broadcast_568_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1919.6 MiB)
20:14:34.495 INFO BlockManagerInfo - Removed broadcast_571_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.495 INFO BlockManagerInfo - Removed broadcast_567_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.496 INFO BlockManagerInfo - Removed broadcast_569_piece0 on localhost:35739 in memory (size: 103.6 KiB, free: 1919.8 MiB)
20:14:34.496 INFO BlockManagerInfo - Removed broadcast_573_piece0 on localhost:35739 in memory (size: 153.7 KiB, free: 1920.0 MiB)
20:14:34.496 INFO BlockManagerInfo - Removed broadcast_572_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1920.0 MiB)
20:14:34.499 INFO MemoryStore - Block broadcast_577_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:14:34.499 INFO BlockManagerInfo - Added broadcast_577_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1920.0 MiB)
20:14:34.499 INFO SparkContext - Created broadcast 577 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.516 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
20:14:34.516 INFO DAGScheduler - Got job 217 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
20:14:34.516 INFO DAGScheduler - Final stage: ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111)
20:14:34.516 INFO DAGScheduler - Parents of final stage: List()
20:14:34.517 INFO DAGScheduler - Missing parents: List()
20:14:34.517 INFO DAGScheduler - Submitting ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:34.517 INFO MemoryStore - Block broadcast_578 stored as values in memory (estimated size 3.0 KiB, free 1919.7 MiB)
20:14:34.517 INFO MemoryStore - Block broadcast_578_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.7 MiB)
20:14:34.517 INFO BlockManagerInfo - Added broadcast_578_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.9 MiB)
20:14:34.518 INFO SparkContext - Created broadcast 578 from broadcast at DAGScheduler.scala:1580
20:14:34.518 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:34.518 INFO TaskSchedulerImpl - Adding task set 281.0 with 4 tasks resource profile 0
20:14:34.518 INFO TaskSetManager - Starting task 0.0 in stage 281.0 (TID 361) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes)
20:14:34.519 INFO TaskSetManager - Starting task 1.0 in stage 281.0 (TID 362) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes)
20:14:34.519 INFO TaskSetManager - Starting task 2.0 in stage 281.0 (TID 363) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes)
20:14:34.519 INFO TaskSetManager - Starting task 3.0 in stage 281.0 (TID 364) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes)
20:14:34.519 INFO Executor - Running task 0.0 in stage 281.0 (TID 361)
20:14:34.519 INFO Executor - Running task 1.0 in stage 281.0 (TID 362)
20:14:34.519 INFO Executor - Running task 3.0 in stage 281.0 (TID 364)
20:14:34.519 INFO Executor - Running task 2.0 in stage 281.0 (TID 363)
20:14:34.521 INFO Executor - Finished task 2.0 in stage 281.0 (TID 363). 39027 bytes result sent to driver
20:14:34.521 INFO Executor - Finished task 3.0 in stage 281.0 (TID 364). 39300 bytes result sent to driver
20:14:34.521 INFO Executor - Finished task 1.0 in stage 281.0 (TID 362). 39130 bytes result sent to driver
20:14:34.521 INFO Executor - Finished task 0.0 in stage 281.0 (TID 361). 38950 bytes result sent to driver
20:14:34.521 INFO TaskSetManager - Finished task 2.0 in stage 281.0 (TID 363) in 2 ms on localhost (executor driver) (1/4)
20:14:34.521 INFO TaskSetManager - Finished task 3.0 in stage 281.0 (TID 364) in 2 ms on localhost (executor driver) (2/4)
20:14:34.522 INFO TaskSetManager - Finished task 0.0 in stage 281.0 (TID 361) in 4 ms on localhost (executor driver) (3/4)
20:14:34.522 INFO TaskSetManager - Finished task 1.0 in stage 281.0 (TID 362) in 4 ms on localhost (executor driver) (4/4)
20:14:34.522 INFO TaskSchedulerImpl - Removed TaskSet 281.0, whose tasks have all completed, from pool
20:14:34.522 INFO DAGScheduler - ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
20:14:34.522 INFO DAGScheduler - Job 217 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.522 INFO TaskSchedulerImpl - Killing all running tasks in stage 281: Stage finished
20:14:34.522 INFO DAGScheduler - Job 217 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005826 s
20:14:34.523 INFO FileInputFormat - Total input files to process : 1
20:14:34.548 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
20:14:34.548 INFO DAGScheduler - Got job 218 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
20:14:34.548 INFO DAGScheduler - Final stage: ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112)
20:14:34.548 INFO DAGScheduler - Parents of final stage: List()
20:14:34.548 INFO DAGScheduler - Missing parents: List()
20:14:34.548 INFO DAGScheduler - Submitting ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:34.559 INFO MemoryStore - Block broadcast_579 stored as values in memory (estimated size 286.8 KiB, free 1919.4 MiB)
20:14:34.560 INFO MemoryStore - Block broadcast_579_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1919.3 MiB)
20:14:34.560 INFO BlockManagerInfo - Added broadcast_579_piece0 in memory on localhost:35739 (size: 103.6 KiB, free: 1919.8 MiB)
20:14:34.560 INFO SparkContext - Created broadcast 579 from broadcast at DAGScheduler.scala:1580
20:14:34.560 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:34.560 INFO TaskSchedulerImpl - Adding task set 282.0 with 1 tasks resource profile 0
20:14:34.561 INFO TaskSetManager - Starting task 0.0 in stage 282.0 (TID 365) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
20:14:34.561 INFO Executor - Running task 0.0 in stage 282.0 (TID 365)
20:14:34.582 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
20:14:34.589 INFO Executor - Finished task 0.0 in stage 282.0 (TID 365). 154058 bytes result sent to driver
20:14:34.589 INFO TaskSetManager - Finished task 0.0 in stage 282.0 (TID 365) in 28 ms on localhost (executor driver) (1/1)
20:14:34.589 INFO TaskSchedulerImpl - Removed TaskSet 282.0, whose tasks have all completed, from pool
20:14:34.590 INFO DAGScheduler - ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.042 s
20:14:34.590 INFO DAGScheduler - Job 218 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.590 INFO TaskSchedulerImpl - Killing all running tasks in stage 282: Stage finished
20:14:34.590 INFO DAGScheduler - Job 218 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.041733 s
20:14:34.592 INFO MemoryStore - Block broadcast_580 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
20:14:34.598 INFO MemoryStore - Block broadcast_580_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
20:14:34.599 INFO BlockManagerInfo - Added broadcast_580_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.8 MiB)
20:14:34.599 INFO SparkContext - Created broadcast 580 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.629 INFO MemoryStore - Block broadcast_581 stored as values in memory (estimated size 536.0 B, free 1918.9 MiB)
20:14:34.629 INFO MemoryStore - Block broadcast_581_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.9 MiB)
20:14:34.629 INFO BlockManagerInfo - Added broadcast_581_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.8 MiB)
20:14:34.629 INFO SparkContext - Created broadcast 581 from broadcast at CramSource.java:114
20:14:34.630 INFO MemoryStore - Block broadcast_582 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
20:14:34.636 INFO MemoryStore - Block broadcast_582_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:14:34.636 INFO BlockManagerInfo - Added broadcast_582_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.636 INFO SparkContext - Created broadcast 582 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.659 WARN FileSystem - Failed to initialize filesystem hdfs://bogus/path.bam: java.lang.IllegalArgumentException: java.net.UnknownHostException: bogus
20:14:34.662 INFO MemoryStore - Block broadcast_583 stored as values in memory (estimated size 298.0 KiB, free 1918.3 MiB)
20:14:34.668 INFO MemoryStore - Block broadcast_583_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.3 MiB)
20:14:34.668 INFO BlockManagerInfo - Added broadcast_583_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.7 MiB)
20:14:34.669 INFO SparkContext - Created broadcast 583 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.693 INFO MemoryStore - Block broadcast_584 stored as values in memory (estimated size 298.0 KiB, free 1918.0 MiB)
20:14:34.699 INFO MemoryStore - Block broadcast_584_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.9 MiB)
20:14:34.699 INFO BlockManagerInfo - Added broadcast_584_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.7 MiB)
20:14:34.699 INFO SparkContext - Created broadcast 584 from newAPIHadoopFile at PathSplitSource.java:96
20:14:34.722 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:154
20:14:34.722 INFO DAGScheduler - Got job 219 (collect at ReadsSparkSourceUnitTest.java:154) with 4 output partitions
20:14:34.722 INFO DAGScheduler - Final stage: ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154)
20:14:34.722 INFO DAGScheduler - Parents of final stage: List()
20:14:34.722 INFO DAGScheduler - Missing parents: List()
20:14:34.722 INFO DAGScheduler - Submitting ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
20:14:34.723 INFO MemoryStore - Block broadcast_585 stored as values in memory (estimated size 3.0 KiB, free 1917.9 MiB)
20:14:34.723 INFO MemoryStore - Block broadcast_585_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.9 MiB)
20:14:34.723 INFO BlockManagerInfo - Added broadcast_585_piece0 in memory on localhost:35739 (size: 1778.0 B, free: 1919.6 MiB)
20:14:34.724 INFO SparkContext - Created broadcast 585 from broadcast at DAGScheduler.scala:1580
20:14:34.724 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
20:14:34.724 INFO TaskSchedulerImpl - Adding task set 283.0 with 4 tasks resource profile 0
20:14:34.724 INFO TaskSetManager - Starting task 0.0 in stage 283.0 (TID 366) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes)
20:14:34.725 INFO TaskSetManager - Starting task 1.0 in stage 283.0 (TID 367) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes)
20:14:34.725 INFO TaskSetManager - Starting task 2.0 in stage 283.0 (TID 368) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes)
20:14:34.725 INFO TaskSetManager - Starting task 3.0 in stage 283.0 (TID 369) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes)
20:14:34.725 INFO Executor - Running task 3.0 in stage 283.0 (TID 369)
20:14:34.725 INFO Executor - Running task 2.0 in stage 283.0 (TID 368)
20:14:34.725 INFO Executor - Running task 1.0 in stage 283.0 (TID 367)
20:14:34.725 INFO Executor - Running task 0.0 in stage 283.0 (TID 366)
20:14:34.727 INFO Executor - Finished task 2.0 in stage 283.0 (TID 368). 163313 bytes result sent to driver
20:14:34.727 INFO Executor - Finished task 1.0 in stage 283.0 (TID 367). 163325 bytes result sent to driver
20:14:34.727 INFO Executor - Finished task 0.0 in stage 283.0 (TID 366). 163118 bytes result sent to driver
20:14:34.728 INFO TaskSetManager - Finished task 2.0 in stage 283.0 (TID 368) in 3 ms on localhost (executor driver) (1/4)
20:14:34.728 INFO TaskSetManager - Finished task 1.0 in stage 283.0 (TID 367) in 4 ms on localhost (executor driver) (2/4)
20:14:34.728 INFO TaskSetManager - Finished task 0.0 in stage 283.0 (TID 366) in 4 ms on localhost (executor driver) (3/4)
20:14:34.729 INFO Executor - Finished task 3.0 in stage 283.0 (TID 369). 162694 bytes result sent to driver
20:14:34.729 INFO TaskSetManager - Finished task 3.0 in stage 283.0 (TID 369) in 4 ms on localhost (executor driver) (4/4)
20:14:34.729 INFO TaskSchedulerImpl - Removed TaskSet 283.0, whose tasks have all completed, from pool
20:14:34.729 INFO DAGScheduler - ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154) finished in 0.006 s
20:14:34.729 INFO DAGScheduler - Job 219 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.729 INFO TaskSchedulerImpl - Killing all running tasks in stage 283: Stage finished
20:14:34.729 INFO DAGScheduler - Job 219 finished: collect at ReadsSparkSourceUnitTest.java:154, took 0.007247 s
20:14:34.738 INFO FileInputFormat - Total input files to process : 2
20:14:34.776 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:155
20:14:34.776 INFO DAGScheduler - Got job 220 (collect at ReadsSparkSourceUnitTest.java:155) with 2 output partitions
20:14:34.776 INFO DAGScheduler - Final stage: ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155)
20:14:34.776 INFO DAGScheduler - Parents of final stage: List()
20:14:34.776 INFO DAGScheduler - Missing parents: List()
20:14:34.776 INFO DAGScheduler - Submitting ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:34.793 INFO MemoryStore - Block broadcast_586 stored as values in memory (estimated size 426.2 KiB, free 1917.5 MiB)
20:14:34.794 INFO MemoryStore - Block broadcast_586_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1917.3 MiB)
20:14:34.794 INFO BlockManagerInfo - Added broadcast_586_piece0 in memory on localhost:35739 (size: 153.7 KiB, free: 1919.5 MiB)
20:14:34.794 INFO SparkContext - Created broadcast 586 from broadcast at DAGScheduler.scala:1580
20:14:34.795 INFO DAGScheduler - Submitting 2 missing tasks from ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0, 1))
20:14:34.795 INFO TaskSchedulerImpl - Adding task set 284.0 with 2 tasks resource profile 0
20:14:34.795 INFO TaskSetManager - Starting task 0.0 in stage 284.0 (TID 370) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7928 bytes)
20:14:34.795 INFO TaskSetManager - Starting task 1.0 in stage 284.0 (TID 371) (localhost, executor driver, partition 1, PROCESS_LOCAL, 7928 bytes)
20:14:34.795 INFO Executor - Running task 0.0 in stage 284.0 (TID 370)
20:14:34.795 INFO Executor - Running task 1.0 in stage 284.0 (TID 371)
20:14:34.825 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00001.bam:0+131967
20:14:34.825 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00000.bam:0+131967
20:14:34.838 INFO Executor - Finished task 1.0 in stage 284.0 (TID 371). 325590 bytes result sent to driver
20:14:34.838 INFO Executor - Finished task 0.0 in stage 284.0 (TID 370). 325590 bytes result sent to driver
20:14:34.839 INFO TaskSetManager - Finished task 1.0 in stage 284.0 (TID 371) in 44 ms on localhost (executor driver) (1/2)
20:14:34.839 INFO TaskSetManager - Finished task 0.0 in stage 284.0 (TID 370) in 44 ms on localhost (executor driver) (2/2)
20:14:34.839 INFO TaskSchedulerImpl - Removed TaskSet 284.0, whose tasks have all completed, from pool
20:14:34.840 INFO DAGScheduler - ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155) finished in 0.064 s
20:14:34.840 INFO DAGScheduler - Job 220 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:34.840 INFO TaskSchedulerImpl - Killing all running tasks in stage 284: Stage finished
20:14:34.840 INFO DAGScheduler - Job 220 finished: collect at ReadsSparkSourceUnitTest.java:155, took 0.064027 s
20:14:34.848 INFO MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
20:14:34.849 INFO NameNode - Formatting using clusterid: testClusterID
20:14:34.849 INFO FSEditLog - Edit logging is async:true
20:14:34.856 INFO FSNamesystem - KeyProvider: null
20:14:34.856 INFO FSNamesystem - fsLock is fair: true
20:14:34.856 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
20:14:34.856 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
20:14:34.856 INFO FSNamesystem - supergroup = supergroup
20:14:34.856 INFO FSNamesystem - isPermissionEnabled = true
20:14:34.856 INFO FSNamesystem - isStoragePolicyEnabled = true
20:14:34.856 INFO FSNamesystem - HA Enabled: false
20:14:34.856 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:34.856 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:14:34.856 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:14:34.857 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:14:34.857 INFO BlockManager - The block deletion will start around 2025 Feb 10 20:14:34
20:14:34.857 INFO GSet - Computing capacity for map BlocksMap
20:14:34.857 INFO GSet - VM type = 64-bit
20:14:34.857 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
20:14:34.857 INFO GSet - capacity = 2^23 = 8388608 entries
20:14:34.862 INFO BlockManager - Storage policy satisfier is disabled
20:14:34.862 INFO BlockManager - dfs.block.access.token.enable = false
20:14:34.862 INFO BlockManagerInfo - Removed broadcast_586_piece0 on localhost:35739 in memory (size: 153.7 KiB, free: 1919.6 MiB)
20:14:34.862 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:14:34.862 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:14:34.862 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:14:34.862 INFO BlockManager - defaultReplication = 1
20:14:34.862 INFO BlockManager - maxReplication = 512
20:14:34.862 INFO BlockManager - minReplication = 1
20:14:34.862 INFO BlockManager - maxReplicationStreams = 2
20:14:34.862 INFO BlockManager - redundancyRecheckInterval = 3000ms
20:14:34.862 INFO BlockManager - encryptDataTransfer = false
20:14:34.862 INFO BlockManager - maxNumBlocksToLog = 1000
20:14:34.862 INFO GSet - Computing capacity for map INodeMap
20:14:34.862 INFO GSet - VM type = 64-bit
20:14:34.862 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
20:14:34.862 INFO GSet - capacity = 2^22 = 4194304 entries
20:14:34.863 INFO BlockManagerInfo - Removed broadcast_578_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1919.7 MiB)
20:14:34.863 INFO BlockManagerInfo - Removed broadcast_577_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.863 INFO FSDirectory - ACLs enabled? true
20:14:34.863 INFO FSDirectory - POSIX ACL inheritance enabled? true
20:14:34.863 INFO FSDirectory - XAttrs enabled? true
20:14:34.863 INFO NameNode - Caching file names occurring more than 10 times
20:14:34.863 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:14:34.863 INFO SnapshotManager - SkipList is disabled
20:14:34.863 INFO GSet - Computing capacity for map cachedBlocks
20:14:34.863 INFO GSet - VM type = 64-bit
20:14:34.863 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:14:34.863 INFO GSet - capacity = 2^20 = 1048576 entries
20:14:34.864 INFO BlockManagerInfo - Removed broadcast_580_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:14:34.864 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:14:34.864 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:14:34.864 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:14:34.864 INFO BlockManagerInfo - Removed broadcast_576_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.7 MiB)
20:14:34.864 INFO FSNamesystem - Retry cache on namenode is enabled
20:14:34.864 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:14:34.864 INFO GSet - Computing capacity for map NameNodeRetryCache
20:14:34.864 INFO GSet - VM type = 64-bit
20:14:34.864 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:14:34.864 INFO GSet - capacity = 2^17 = 131072 entries
20:14:34.865 INFO BlockManagerInfo - Removed broadcast_584_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:14:34.865 INFO FSImage - Allocated new BlockPoolId: BP-1154891655-10.1.0.79-1739218474865
20:14:34.865 INFO BlockManagerInfo - Removed broadcast_581_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.8 MiB)
20:14:34.865 INFO BlockManagerInfo - Removed broadcast_583_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:14:34.866 INFO BlockManagerInfo - Removed broadcast_579_piece0 on localhost:35739 in memory (size: 103.6 KiB, free: 1919.9 MiB)
20:14:34.866 INFO BlockManagerInfo - Removed broadcast_585_piece0 on localhost:35739 in memory (size: 1778.0 B, free: 1920.0 MiB)
20:14:34.866 INFO BlockManagerInfo - Removed broadcast_582_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:14:34.868 INFO Storage - Storage directory /tmp/minicluster_storage12869977849631780213/name-0-1 has been successfully formatted.
20:14:34.869 INFO Storage - Storage directory /tmp/minicluster_storage12869977849631780213/name-0-2 has been successfully formatted.
20:14:34.880 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage12869977849631780213/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
20:14:34.881 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage12869977849631780213/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
20:14:34.883 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage12869977849631780213/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:14:34.886 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage12869977849631780213/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:14:34.887 INFO NNStorageRetentionManager - Going to retain 1 images with txid >= 0
20:14:34.890 INFO FSNamesystem - Stopping services started for active state
20:14:34.890 INFO FSNamesystem - Stopping services started for standby state
20:14:34.890 INFO NameNode - createNameNode []
20:14:34.891 WARN MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
20:14:34.892 INFO MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
20:14:34.892 INFO MetricsSystemImpl - NameNode metrics system started
20:14:34.893 INFO NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
20:14:34.903 INFO JvmPauseMonitor - Starting JVM pause monitor
20:14:34.903 INFO DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
20:14:34.903 INFO DFSUtil - Starting Web-server for hdfs at: http://localhost:0
20:14:34.904 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:14:34.905 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
20:14:34.906 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:14:34.906 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
20:14:34.906 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:14:34.907 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
20:14:34.907 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
20:14:34.907 INFO HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
20:14:34.908 INFO HttpServer2 - Jetty bound to port 32819
20:14:34.908 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:14:34.909 INFO session - DefaultSessionIdManager workerName=node0
20:14:34.909 INFO session - No SessionScavenger set, using defaults
20:14:34.909 INFO session - node0 Scavenging every 600000ms
20:14:34.910 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:14:34.910 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@673c937b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:14:35.000 INFO ContextHandler - Started o.e.j.w.WebAppContext@392005e7{hdfs,/,file:///tmp/jetty-localhost-32819-hadoop-hdfs-3_3_6-tests_jar-_-any-2606245045016362609/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:14:35.000 INFO AbstractConnector - Started ServerConnector@7bfc64c2{HTTP/1.1, (http/1.1)}{localhost:32819}
20:14:35.000 INFO Server - Started @79848ms
20:14:35.001 INFO FSEditLog - Edit logging is async:true
20:14:35.009 INFO FSNamesystem - KeyProvider: null
20:14:35.009 INFO FSNamesystem - fsLock is fair: true
20:14:35.009 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
20:14:35.009 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
20:14:35.009 INFO FSNamesystem - supergroup = supergroup
20:14:35.009 INFO FSNamesystem - isPermissionEnabled = true
20:14:35.009 INFO FSNamesystem - isStoragePolicyEnabled = true
20:14:35.009 INFO FSNamesystem - HA Enabled: false
20:14:35.009 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:35.009 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:14:35.009 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:14:35.009 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:14:35.009 INFO BlockManager - The block deletion will start around 2025 Feb 10 20:14:35
20:14:35.009 INFO GSet - Computing capacity for map BlocksMap
20:14:35.009 INFO GSet - VM type = 64-bit
20:14:35.009 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
20:14:35.009 INFO GSet - capacity = 2^23 = 8388608 entries
20:14:35.015 INFO BlockManager - Storage policy satisfier is disabled
20:14:35.015 INFO BlockManager - dfs.block.access.token.enable = false
20:14:35.015 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:14:35.015 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:14:35.015 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:14:35.015 INFO BlockManager - defaultReplication = 1
20:14:35.015 INFO BlockManager - maxReplication = 512
20:14:35.015 INFO BlockManager - minReplication = 1
20:14:35.015 INFO BlockManager - maxReplicationStreams = 2
20:14:35.015 INFO BlockManager - redundancyRecheckInterval = 3000ms
20:14:35.015 INFO BlockManager - encryptDataTransfer = false
20:14:35.015 INFO BlockManager - maxNumBlocksToLog = 1000
20:14:35.015 INFO GSet - Computing capacity for map INodeMap
20:14:35.015 INFO GSet - VM type = 64-bit
20:14:35.015 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
20:14:35.015 INFO GSet - capacity = 2^22 = 4194304 entries
20:14:35.020 INFO FSDirectory - ACLs enabled? true
20:14:35.020 INFO FSDirectory - POSIX ACL inheritance enabled? true
20:14:35.020 INFO FSDirectory - XAttrs enabled? true
20:14:35.020 INFO NameNode - Caching file names occurring more than 10 times
20:14:35.020 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:14:35.020 INFO SnapshotManager - SkipList is disabled
20:14:35.020 INFO GSet - Computing capacity for map cachedBlocks
20:14:35.020 INFO GSet - VM type = 64-bit
20:14:35.020 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:14:35.020 INFO GSet - capacity = 2^20 = 1048576 entries
20:14:35.020 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:14:35.020 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:14:35.020 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:14:35.020 INFO FSNamesystem - Retry cache on namenode is enabled
20:14:35.020 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:14:35.020 INFO GSet - Computing capacity for map NameNodeRetryCache
20:14:35.020 INFO GSet - VM type = 64-bit
20:14:35.020 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:14:35.020 INFO GSet - capacity = 2^17 = 131072 entries
20:14:35.022 INFO Storage - Lock on /tmp/minicluster_storage12869977849631780213/name-0-1/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:35.022 INFO Storage - Lock on /tmp/minicluster_storage12869977849631780213/name-0-2/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:35.023 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage12869977849631780213/name-0-1/current
20:14:35.023 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage12869977849631780213/name-0-2/current
20:14:35.023 INFO FSImage - No edit log streams selected.
20:14:35.023 INFO FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage12869977849631780213/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
20:14:35.024 INFO FSImageFormatPBINode - Loading 1 INodes.
20:14:35.024 INFO FSImageFormatPBINode - Successfully loaded 1 inodes
20:14:35.024 INFO FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
20:14:35.024 INFO FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
20:14:35.025 INFO FSImage - Loaded image for txid 0 from /tmp/minicluster_storage12869977849631780213/name-0-1/current/fsimage_0000000000000000000
20:14:35.025 INFO FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
20:14:35.025 INFO FSEditLog - Starting log segment at 1
20:14:35.028 INFO NameCache - initialized with 0 entries 0 lookups
20:14:35.028 INFO FSNamesystem - Finished loading FSImage in 7 msecs
20:14:35.028 INFO NameNode - RPC server is binding to localhost:0
20:14:35.028 INFO NameNode - Enable NameNode state context:false
20:14:35.028 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:14:35.028 INFO Server - Listener at localhost:41253
20:14:35.029 INFO Server - Starting Socket Reader #1 for port 0
20:14:35.030 INFO NameNode - Clients are to use localhost:41253 to access this namenode/service.
20:14:35.031 INFO FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
20:14:35.041 INFO LeaseManager - Number of blocks under construction: 0
20:14:35.042 INFO DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
20:14:35.042 INFO BlockManager - Start MarkedDeleteBlockScrubber thread
20:14:35.042 INFO BlockManager - initializing replication queues
20:14:35.043 INFO StateChange - STATE* Leaving safe mode after 0 secs
20:14:35.043 INFO StateChange - STATE* Network topology has 0 racks and 0 datanodes
20:14:35.043 INFO StateChange - STATE* UnderReplicatedBlocks has 0 blocks
20:14:35.044 INFO Server - IPC Server Responder: starting
20:14:35.045 INFO Server - IPC Server listener on 0: starting
20:14:35.046 INFO NameNode - NameNode RPC up at: localhost/127.0.0.1:41253
20:14:35.046 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:14:35.046 INFO FSNamesystem - Starting services required for active state
20:14:35.046 INFO FSDirectory - Initializing quota with 12 thread(s)
20:14:35.047 INFO FSDirectory - Quota initialization completed in 0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
20:14:35.047 INFO BlockManager - Total number of blocks = 0
20:14:35.047 INFO BlockManager - Number of invalid blocks = 0
20:14:35.047 INFO BlockManager - Number of under-replicated blocks = 0
20:14:35.047 INFO BlockManager - Number of over-replicated blocks = 0
20:14:35.047 INFO BlockManager - Number of blocks being written = 0
20:14:35.047 INFO CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
20:14:35.047 INFO StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 5 msec
20:14:35.048 INFO MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data1,[DISK]file:/tmp/minicluster_storage12869977849631780213/data/data2
20:14:35.049 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data1
20:14:35.049 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data2
20:14:35.060 INFO MetricsSystemImpl - DataNode metrics system started (again)
20:14:35.060 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:35.060 INFO BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
20:14:35.061 INFO DataNode - Configured hostname is 127.0.0.1
20:14:35.061 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:35.061 INFO DataNode - Starting DataNode with maxLockedMemory = 0
20:14:35.061 INFO DataNode - Opened streaming server at /127.0.0.1:36287
20:14:35.061 INFO DataNode - Balancing bandwidth is 104857600 bytes/s
20:14:35.061 INFO DataNode - Number threads for balancing is 100
20:14:35.062 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:14:35.063 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
20:14:35.064 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:14:35.065 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
20:14:35.065 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:14:35.066 INFO HttpServer2 - Jetty bound to port 32919
20:14:35.066 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:14:35.066 INFO session - DefaultSessionIdManager workerName=node0
20:14:35.066 INFO session - No SessionScavenger set, using defaults
20:14:35.066 INFO session - node0 Scavenging every 660000ms
20:14:35.067 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@76ff7be0{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:14:35.194 INFO ContextHandler - Started o.e.j.w.WebAppContext@22915312{datanode,/,file:///tmp/jetty-localhost-32919-hadoop-hdfs-3_3_6-tests_jar-_-any-13111182207323427928/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:14:35.194 INFO AbstractConnector - Started ServerConnector@48417074{HTTP/1.1, (http/1.1)}{localhost:32919}
20:14:35.194 INFO Server - Started @80042ms
20:14:35.195 WARN DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
20:14:35.196 INFO DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:44895
20:14:35.196 INFO DataNode - dnUserName = runner
20:14:35.196 INFO DataNode - supergroup = supergroup
20:14:35.196 INFO JvmPauseMonitor - Starting JVM pause monitor
20:14:35.196 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:14:35.196 INFO Server - Listener at localhost:35445
20:14:35.197 INFO Server - Starting Socket Reader #1 for port 0
20:14:35.198 INFO DataNode - Opened IPC server at /127.0.0.1:35445
20:14:35.208 INFO DataNode - Refresh request received for nameservices: null
20:14:35.209 INFO DataNode - Starting BPOfferServices for nameservices: <default>
20:14:35.209 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:14:35.209 INFO DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:41253 starting to offer service
20:14:35.210 INFO Server - IPC Server Responder: starting
20:14:35.210 INFO Server - IPC Server listener on 0: starting
20:14:35.211 INFO DataNode - Acknowledging ACTIVE Namenode during handshake. Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:41253
20:14:35.211 INFO Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
20:14:35.212 INFO Storage - Lock on /tmp/minicluster_storage12869977849631780213/data/data1/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:35.212 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data1 is not formatted for namespace 520428829. Formatting...
20:14:35.212 INFO Storage - Generated new storageID DS-524ca767-0408-4ee4-8a45-16a3df81ab63 for directory /tmp/minicluster_storage12869977849631780213/data/data1
20:14:35.213 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
20:14:35.213 INFO MiniDFSCluster - dnInfo.length != numDataNodes
20:14:35.213 INFO MiniDFSCluster - Waiting for cluster to become active
20:14:35.214 INFO Storage - Lock on /tmp/minicluster_storage12869977849631780213/data/data2/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:35.214 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data2 is not formatted for namespace 520428829. Formatting...
20:14:35.214 INFO Storage - Generated new storageID DS-1298e5ca-89b6-43fd-b593-28ee71074b80 for directory /tmp/minicluster_storage12869977849631780213/data/data2
20:14:35.226 INFO Storage - Analyzing storage directories for bpid BP-1154891655-10.1.0.79-1739218474865
20:14:35.226 INFO Storage - Locking is disabled for /tmp/minicluster_storage12869977849631780213/data/data1/current/BP-1154891655-10.1.0.79-1739218474865
20:14:35.226 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data1 and block pool id BP-1154891655-10.1.0.79-1739218474865 is not formatted. Formatting ...
20:14:35.227 INFO Storage - Formatting block pool BP-1154891655-10.1.0.79-1739218474865 directory /tmp/minicluster_storage12869977849631780213/data/data1/current/BP-1154891655-10.1.0.79-1739218474865/current
20:14:35.238 INFO Storage - Analyzing storage directories for bpid BP-1154891655-10.1.0.79-1739218474865
20:14:35.238 INFO Storage - Locking is disabled for /tmp/minicluster_storage12869977849631780213/data/data2/current/BP-1154891655-10.1.0.79-1739218474865
20:14:35.238 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data2 and block pool id BP-1154891655-10.1.0.79-1739218474865 is not formatted. Formatting ...
20:14:35.238 INFO Storage - Formatting block pool BP-1154891655-10.1.0.79-1739218474865 directory /tmp/minicluster_storage12869977849631780213/data/data2/current/BP-1154891655-10.1.0.79-1739218474865/current
20:14:35.239 INFO DataNode - Setting up storage: nsid=520428829;bpid=BP-1154891655-10.1.0.79-1739218474865;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=520428829;c=1739218474865;bpid=BP-1154891655-10.1.0.79-1739218474865;dnuuid=null
20:14:35.240 INFO DataNode - Generated and persisted new Datanode UUID c9366349-e487-4106-97fc-5a72bfa5e390
20:14:35.240 INFO FsDatasetImpl - The datanode lock is a read write lock
20:14:35.241 INFO FsDatasetImpl - Added new volume: DS-524ca767-0408-4ee4-8a45-16a3df81ab63
20:14:35.241 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data1, StorageType: DISK
20:14:35.242 INFO FsDatasetImpl - Added new volume: DS-1298e5ca-89b6-43fd-b593-28ee71074b80
20:14:35.242 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage12869977849631780213/data/data2, StorageType: DISK
20:14:35.242 INFO MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
20:14:35.242 INFO FsDatasetImpl - Registered FSDatasetState MBean
20:14:35.243 INFO FsDatasetImpl - Adding block pool BP-1154891655-10.1.0.79-1739218474865
20:14:35.243 INFO FsDatasetImpl - Scanning block pool BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data1...
20:14:35.243 INFO FsDatasetImpl - Scanning block pool BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data2...
20:14:35.243 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage12869977849631780213/data/data1/current/BP-1154891655-10.1.0.79-1739218474865/current, will proceed with Du for space computation calculation,
20:14:35.243 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage12869977849631780213/data/data2/current/BP-1154891655-10.1.0.79-1739218474865/current, will proceed with Du for space computation calculation,
20:14:35.257 INFO FsDatasetImpl - Time taken to scan block pool BP-1154891655-10.1.0.79-1739218474865 on /tmp/minicluster_storage12869977849631780213/data/data1: 15ms
20:14:35.258 INFO FsDatasetImpl - Time taken to scan block pool BP-1154891655-10.1.0.79-1739218474865 on /tmp/minicluster_storage12869977849631780213/data/data2: 14ms
20:14:35.258 INFO FsDatasetImpl - Total time to scan all replicas for block pool BP-1154891655-10.1.0.79-1739218474865: 15ms
20:14:35.258 INFO FsDatasetImpl - Adding replicas to map for block pool BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data1...
20:14:35.258 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage12869977849631780213/data/data1/current/BP-1154891655-10.1.0.79-1739218474865/current/replicas doesn't exist
20:14:35.258 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data1: 0ms
20:14:35.258 INFO FsDatasetImpl - Adding replicas to map for block pool BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data2...
20:14:35.258 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage12869977849631780213/data/data2/current/BP-1154891655-10.1.0.79-1739218474865/current/replicas doesn't exist
20:14:35.258 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data2: 0ms
20:14:35.258 INFO FsDatasetImpl - Total time to add all replicas to map for block pool BP-1154891655-10.1.0.79-1739218474865: 1ms
20:14:35.258 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage12869977849631780213/data/data1
20:14:35.259 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage12869977849631780213/data/data1
20:14:35.259 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage12869977849631780213/data/data2
20:14:35.259 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage12869977849631780213/data/data2
20:14:35.259 WARN DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
20:14:35.259 INFO VolumeScanner - Now scanning bpid BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data1
20:14:35.259 INFO DirectoryScanner - Periodic Directory Tree Verification scan starting in 14641840ms with interval of 21600000ms and throttle limit of -1ms/s
20:14:35.260 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage12869977849631780213/data/data1, DS-524ca767-0408-4ee4-8a45-16a3df81ab63): finished scanning block pool BP-1154891655-10.1.0.79-1739218474865
20:14:35.260 INFO VolumeScanner - Now scanning bpid BP-1154891655-10.1.0.79-1739218474865 on volume /tmp/minicluster_storage12869977849631780213/data/data2
20:14:35.260 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage12869977849631780213/data/data2, DS-1298e5ca-89b6-43fd-b593-28ee71074b80): finished scanning block pool BP-1154891655-10.1.0.79-1739218474865
20:14:35.265 INFO DataNode - Block pool BP-1154891655-10.1.0.79-1739218474865 (Datanode Uuid c9366349-e487-4106-97fc-5a72bfa5e390) service to localhost/127.0.0.1:41253 beginning handshake with NN
20:14:35.265 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage12869977849631780213/data/data2, DS-1298e5ca-89b6-43fd-b593-28ee71074b80): no suitable block pools found to scan. Waiting 1814399994 ms.
20:14:35.265 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage12869977849631780213/data/data1, DS-524ca767-0408-4ee4-8a45-16a3df81ab63): no suitable block pools found to scan. Waiting 1814399994 ms.
20:14:35.266 INFO StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:36287, datanodeUuid=c9366349-e487-4106-97fc-5a72bfa5e390, infoPort=44895, infoSecurePort=0, ipcPort=35445, storageInfo=lv=-57;cid=testClusterID;nsid=520428829;c=1739218474865) storage c9366349-e487-4106-97fc-5a72bfa5e390
20:14:35.266 INFO NetworkTopology - Adding a new node: /default-rack/127.0.0.1:36287
20:14:35.266 INFO BlockReportLeaseManager - Registered DN c9366349-e487-4106-97fc-5a72bfa5e390 (127.0.0.1:36287).
20:14:35.266 INFO DataNode - Block pool BP-1154891655-10.1.0.79-1739218474865 (Datanode Uuid c9366349-e487-4106-97fc-5a72bfa5e390) service to localhost/127.0.0.1:41253 successfully registered with NN
20:14:35.270 INFO DataNode - For namenode localhost/127.0.0.1:41253 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
20:14:35.270 INFO DatanodeDescriptor - Adding new storage ID DS-524ca767-0408-4ee4-8a45-16a3df81ab63 for DN 127.0.0.1:36287
20:14:35.270 INFO DatanodeDescriptor - Adding new storage ID DS-1298e5ca-89b6-43fd-b593-28ee71074b80 for DN 127.0.0.1:36287
20:14:35.271 INFO DataNode - After receiving heartbeat response, updating state of namenode localhost:41253 to active
20:14:35.271 INFO BlockStateChange - BLOCK* processReport 0xb5ad589c026e4b26 with lease ID 0xc12c23dabac693b4: Processing first storage report for DS-1298e5ca-89b6-43fd-b593-28ee71074b80 from datanode DatanodeRegistration(127.0.0.1:36287, datanodeUuid=c9366349-e487-4106-97fc-5a72bfa5e390, infoPort=44895, infoSecurePort=0, ipcPort=35445, storageInfo=lv=-57;cid=testClusterID;nsid=520428829;c=1739218474865)
20:14:35.271 INFO BlockStateChange - BLOCK* processReport 0xb5ad589c026e4b26 with lease ID 0xc12c23dabac693b4: from storage DS-1298e5ca-89b6-43fd-b593-28ee71074b80 node DatanodeRegistration(127.0.0.1:36287, datanodeUuid=c9366349-e487-4106-97fc-5a72bfa5e390, infoPort=44895, infoSecurePort=0, ipcPort=35445, storageInfo=lv=-57;cid=testClusterID;nsid=520428829;c=1739218474865), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
20:14:35.271 INFO BlockStateChange - BLOCK* processReport 0xb5ad589c026e4b26 with lease ID 0xc12c23dabac693b4: Processing first storage report for DS-524ca767-0408-4ee4-8a45-16a3df81ab63 from datanode DatanodeRegistration(127.0.0.1:36287, datanodeUuid=c9366349-e487-4106-97fc-5a72bfa5e390, infoPort=44895, infoSecurePort=0, ipcPort=35445, storageInfo=lv=-57;cid=testClusterID;nsid=520428829;c=1739218474865)
20:14:35.271 INFO BlockStateChange - BLOCK* processReport 0xb5ad589c026e4b26 with lease ID 0xc12c23dabac693b4: from storage DS-524ca767-0408-4ee4-8a45-16a3df81ab63 node DatanodeRegistration(127.0.0.1:36287, datanodeUuid=c9366349-e487-4106-97fc-5a72bfa5e390, infoPort=44895, infoSecurePort=0, ipcPort=35445, storageInfo=lv=-57;cid=testClusterID;nsid=520428829;c=1739218474865), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
20:14:35.272 INFO DataNode - Successfully sent block report 0xb5ad589c026e4b26 with lease ID 0xc12c23dabac693b4 to namenode: localhost/127.0.0.1:41253, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
20:14:35.272 INFO DataNode - Got finalize command for block pool BP-1154891655-10.1.0.79-1739218474865
20:14:35.272 INFO DataNode - Starting IBR Task Handler.
20:14:35.313 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
20:14:35.314 INFO MiniDFSCluster - Cluster is active
20:14:35.316 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.318 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.cram dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
20:14:35.319 INFO StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:36287 for /user/runner/hdfs.cram
20:14:35.320 INFO DataNode - Receiving BP-1154891655-10.1.0.79-1739218474865:blk_1073741825_1001 src: /127.0.0.1:44024 dest: /127.0.0.1:36287
20:14:35.321 INFO clienttrace - src: /127.0.0.1:44024, dest: /127.0.0.1:36287, bytes: 50619, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-569958767_1, offset: 0, srvID: c9366349-e487-4106-97fc-5a72bfa5e390, blockid: BP-1154891655-10.1.0.79-1739218474865:blk_1073741825_1001, duration(ns): 512684
20:14:35.321 INFO DataNode - PacketResponder: BP-1154891655-10.1.0.79-1739218474865:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
20:14:35.322 INFO FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/runner/hdfs.cram
20:14:35.723 INFO StateChange - DIR* completeFile: /user/runner/hdfs.cram is closed by DFSClient_NONMAPREDUCE_-569958767_1
20:14:35.723 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.724 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.fasta dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
20:14:35.725 INFO StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:36287 for /user/runner/hdfs.fasta
20:14:35.726 INFO DataNode - Receiving BP-1154891655-10.1.0.79-1739218474865:blk_1073741826_1002 src: /127.0.0.1:44028 dest: /127.0.0.1:36287
20:14:35.729 INFO clienttrace - src: /127.0.0.1:44028, dest: /127.0.0.1:36287, bytes: 1016671, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-569958767_1, offset: 0, srvID: c9366349-e487-4106-97fc-5a72bfa5e390, blockid: BP-1154891655-10.1.0.79-1739218474865:blk_1073741826_1002, duration(ns): 2478172
20:14:35.729 INFO DataNode - PacketResponder: BP-1154891655-10.1.0.79-1739218474865:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
20:14:35.730 INFO StateChange - DIR* completeFile: /user/runner/hdfs.fasta is closed by DFSClient_NONMAPREDUCE_-569958767_1
20:14:35.730 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.731 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.fasta.fai dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
20:14:35.732 INFO StateChange - BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:36287 for /user/runner/hdfs.fasta.fai
20:14:35.732 INFO DataNode - Receiving BP-1154891655-10.1.0.79-1739218474865:blk_1073741827_1003 src: /127.0.0.1:44044 dest: /127.0.0.1:36287
20:14:35.733 INFO clienttrace - src: /127.0.0.1:44044, dest: /127.0.0.1:36287, bytes: 19, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-569958767_1, offset: 0, srvID: c9366349-e487-4106-97fc-5a72bfa5e390, blockid: BP-1154891655-10.1.0.79-1739218474865:blk_1073741827_1003, duration(ns): 379017
20:14:35.733 INFO DataNode - PacketResponder: BP-1154891655-10.1.0.79-1739218474865:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
20:14:35.734 INFO StateChange - DIR* completeFile: /user/runner/hdfs.fasta.fai is closed by DFSClient_NONMAPREDUCE_-569958767_1
20:14:35.734 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.735 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.735 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.735 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.736 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.736 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
20:14:35.737 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
20:14:35.737 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.737 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.738 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.738 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.739 WARN DFSUtil - Unexpected value for data transfer bytes=23 duration=0
20:14:35.741 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:14:35.741 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.741 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.742 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
20:14:35.742 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
20:14:35.742 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.743 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.744 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:14:35.744 INFO MemoryStore - Block broadcast_587 stored as values in memory (estimated size 472.0 B, free 1920.0 MiB)
20:14:35.744 INFO MemoryStore - Block broadcast_587_piece0 stored as bytes in memory (estimated size 129.0 B, free 1920.0 MiB)
20:14:35.744 INFO BlockManagerInfo - Added broadcast_587_piece0 in memory on localhost:35739 (size: 129.0 B, free: 1920.0 MiB)
20:14:35.745 INFO SparkContext - Created broadcast 587 from broadcast at CramSource.java:114
20:14:35.746 INFO MemoryStore - Block broadcast_588 stored as values in memory (estimated size 297.9 KiB, free 1919.7 MiB)
20:14:35.752 INFO MemoryStore - Block broadcast_588_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:14:35.752 INFO BlockManagerInfo - Added broadcast_588_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1920.0 MiB)
20:14:35.752 INFO SparkContext - Created broadcast 588 from newAPIHadoopFile at PathSplitSource.java:96
20:14:35.767 INFO MemoryStore - Block broadcast_589 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
20:14:35.767 INFO MemoryStore - Block broadcast_589_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
20:14:35.767 INFO BlockManagerInfo - Added broadcast_589_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1920.0 MiB)
20:14:35.767 INFO SparkContext - Created broadcast 589 from broadcast at CramSource.java:114
20:14:35.768 INFO MemoryStore - Block broadcast_590 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
20:14:35.774 INFO MemoryStore - Block broadcast_590_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
20:14:35.774 INFO BlockManagerInfo - Added broadcast_590_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.9 MiB)
20:14:35.774 INFO SparkContext - Created broadcast 590 from newAPIHadoopFile at PathSplitSource.java:96
20:14:35.788 INFO FileInputFormat - Total input files to process : 1
20:14:35.813 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:244
20:14:35.813 INFO DAGScheduler - Got job 221 (collect at ReadsSparkSourceUnitTest.java:244) with 1 output partitions
20:14:35.813 INFO DAGScheduler - Final stage: ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244)
20:14:35.813 INFO DAGScheduler - Parents of final stage: List()
20:14:35.813 INFO DAGScheduler - Missing parents: List()
20:14:35.813 INFO DAGScheduler - Submitting ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:35.825 INFO MemoryStore - Block broadcast_591 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
20:14:35.825 INFO MemoryStore - Block broadcast_591_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
20:14:35.826 INFO BlockManagerInfo - Added broadcast_591_piece0 in memory on localhost:35739 (size: 103.6 KiB, free: 1919.8 MiB)
20:14:35.826 INFO SparkContext - Created broadcast 591 from broadcast at DAGScheduler.scala:1580
20:14:35.826 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:35.826 INFO TaskSchedulerImpl - Adding task set 285.0 with 1 tasks resource profile 0
20:14:35.826 INFO TaskSetManager - Starting task 0.0 in stage 285.0 (TID 372) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
20:14:35.827 INFO Executor - Running task 0.0 in stage 285.0 (TID 372)
20:14:35.847 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
20:14:35.854 INFO Executor - Finished task 0.0 in stage 285.0 (TID 372). 154101 bytes result sent to driver
20:14:35.855 INFO TaskSetManager - Finished task 0.0 in stage 285.0 (TID 372) in 29 ms on localhost (executor driver) (1/1)
20:14:35.855 INFO TaskSchedulerImpl - Removed TaskSet 285.0, whose tasks have all completed, from pool
20:14:35.855 INFO DAGScheduler - ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244) finished in 0.041 s
20:14:35.855 INFO DAGScheduler - Job 221 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:35.855 INFO TaskSchedulerImpl - Killing all running tasks in stage 285: Stage finished
20:14:35.855 INFO DAGScheduler - Job 221 finished: collect at ReadsSparkSourceUnitTest.java:244, took 0.042310 s
20:14:35.856 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.857 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.857 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.857 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.858 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.858 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
20:14:35.859 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
20:14:35.859 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.859 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.860 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.860 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.861 WARN DFSUtil - Unexpected value for data transfer bytes=23 duration=0
20:14:35.862 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.863 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.863 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
20:14:35.863 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
20:14:35.863 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.864 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.864 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:14:35.865 INFO MemoryStore - Block broadcast_592 stored as values in memory (estimated size 472.0 B, free 1918.9 MiB)
20:14:35.865 INFO MemoryStore - Block broadcast_592_piece0 stored as bytes in memory (estimated size 129.0 B, free 1918.9 MiB)
20:14:35.865 INFO BlockManagerInfo - Added broadcast_592_piece0 in memory on localhost:35739 (size: 129.0 B, free: 1919.8 MiB)
20:14:35.865 INFO SparkContext - Created broadcast 592 from broadcast at CramSource.java:114
20:14:35.866 INFO MemoryStore - Block broadcast_593 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
20:14:35.872 INFO MemoryStore - Block broadcast_593_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:14:35.872 INFO BlockManagerInfo - Added broadcast_593_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.8 MiB)
20:14:35.873 INFO SparkContext - Created broadcast 593 from newAPIHadoopFile at PathSplitSource.java:96
20:14:35.886 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.887 INFO FileInputFormat - Total input files to process : 1
20:14:35.887 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.912 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:245
20:14:35.912 INFO DAGScheduler - Got job 222 (collect at ReadsSparkSourceUnitTest.java:245) with 1 output partitions
20:14:35.912 INFO DAGScheduler - Final stage: ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245)
20:14:35.912 INFO DAGScheduler - Parents of final stage: List()
20:14:35.912 INFO DAGScheduler - Missing parents: List()
20:14:35.912 INFO DAGScheduler - Submitting ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:35.923 INFO MemoryStore - Block broadcast_594 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
20:14:35.924 INFO MemoryStore - Block broadcast_594_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
20:14:35.924 INFO BlockManagerInfo - Added broadcast_594_piece0 in memory on localhost:35739 (size: 103.6 KiB, free: 1919.7 MiB)
20:14:35.924 INFO SparkContext - Created broadcast 594 from broadcast at DAGScheduler.scala:1580
20:14:35.924 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:35.924 INFO TaskSchedulerImpl - Adding task set 286.0 with 1 tasks resource profile 0
20:14:35.925 INFO TaskSetManager - Starting task 0.0 in stage 286.0 (TID 373) (localhost, executor driver, partition 0, ANY, 7797 bytes)
20:14:35.925 INFO Executor - Running task 0.0 in stage 286.0 (TID 373)
20:14:35.944 INFO NewHadoopRDD - Input split: hdfs://localhost:41253/user/runner/hdfs.cram:0+50619
20:14:35.945 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.945 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
20:14:35.946 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
20:14:35.946 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
20:14:35.946 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.947 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
20:14:35.947 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.948 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
20:14:35.949 WARN DFSUtil - Unexpected value for data transfer bytes=23 duration=0
20:14:35.950 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
20:14:35.959 WARN DFSUtil - Unexpected value for data transfer bytes=232039 duration=0
20:14:35.959 WARN DFSUtil - Unexpected value for data transfer bytes=99943 duration=0
20:14:35.962 INFO Executor - Finished task 0.0 in stage 286.0 (TID 373). 154101 bytes result sent to driver
20:14:35.963 INFO TaskSetManager - Finished task 0.0 in stage 286.0 (TID 373) in 38 ms on localhost (executor driver) (1/1)
20:14:35.963 INFO TaskSchedulerImpl - Removed TaskSet 286.0, whose tasks have all completed, from pool
20:14:35.964 INFO DAGScheduler - ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245) finished in 0.052 s
20:14:35.964 INFO DAGScheduler - Job 222 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:35.964 INFO TaskSchedulerImpl - Killing all running tasks in stage 286: Stage finished
20:14:35.964 INFO DAGScheduler - Job 222 finished: collect at ReadsSparkSourceUnitTest.java:245, took 0.052092 s
20:14:35.967 INFO MiniDFSCluster - Shutting down the Mini HDFS Cluster
20:14:35.967 INFO MiniDFSCluster - Shutting down DataNode 0
20:14:35.967 INFO DirectoryScanner - Shutdown has been called
20:14:35.967 INFO DataNode - Closing all peers.
20:14:35.968 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage12869977849631780213/data/data2, DS-1298e5ca-89b6-43fd-b593-28ee71074b80) exiting.
20:14:35.968 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage12869977849631780213/data/data1, DS-524ca767-0408-4ee4-8a45-16a3df81ab63) exiting.
20:14:35.970 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@22915312{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:14:35.971 INFO AbstractConnector - Stopped ServerConnector@48417074{HTTP/1.1, (http/1.1)}{localhost:0}
20:14:35.971 INFO session - node0 Stopped scavenging
20:14:35.971 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@76ff7be0{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:14:35.972 INFO DataNode - Waiting up to 30 seconds for transfer threads to complete
20:14:35.972 INFO Server - Stopping server on 35445
20:14:35.972 INFO Server - Stopping IPC Server listener on 0
20:14:35.973 INFO Server - Stopping IPC Server Responder
20:14:35.973 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:14:35.973 ERROR DataNode - Command processor encountered interrupt and exit.
20:14:35.973 WARN DataNode - Ending block pool service for: Block pool BP-1154891655-10.1.0.79-1739218474865 (Datanode Uuid c9366349-e487-4106-97fc-5a72bfa5e390) service to localhost/127.0.0.1:41253
20:14:35.973 WARN DataNode - Ending command processor service for: Thread[Command processor,5,main]
20:14:35.973 INFO DataNode - Removed Block pool BP-1154891655-10.1.0.79-1739218474865 (Datanode Uuid c9366349-e487-4106-97fc-5a72bfa5e390)
20:14:35.973 INFO FsDatasetImpl - Removing block pool BP-1154891655-10.1.0.79-1739218474865
20:14:35.973 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:14:35.973 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:14:35.974 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:14:35.974 INFO FsDatasetAsyncDiskService - Shutting down all async disk service threads
20:14:35.974 INFO FsDatasetAsyncDiskService - All async disk service threads have been shut down
20:14:35.974 INFO RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
20:14:35.974 INFO RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
20:14:35.974 INFO DataNode - Shutdown complete.
20:14:35.974 INFO MiniDFSCluster - Shutting down the namenode
20:14:35.974 INFO FSNamesystem - Stopping services started for active state
20:14:35.974 INFO FSEditLog - Ending log segment 1, 18
20:14:35.974 INFO FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
20:14:35.974 INFO FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
20:14:35.974 INFO FSEditLog - Number of transactions: 19 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 5 Number of syncs: 15 SyncTimes(ms): 1 1
20:14:35.975 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage12869977849631780213/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage12869977849631780213/name-0-1/current/edits_0000000000000000001-0000000000000000019
20:14:35.975 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage12869977849631780213/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage12869977849631780213/name-0-2/current/edits_0000000000000000001-0000000000000000019
20:14:35.975 INFO FSEditLog - FSEditLogAsync was interrupted, exiting
20:14:35.975 INFO CacheReplicationMonitor - Shutting down CacheReplicationMonitor
20:14:35.975 INFO Server - Stopping server on 41253
20:14:35.976 INFO Server - Stopping IPC Server listener on 0
20:14:35.976 INFO Server - Stopping IPC Server Responder
20:14:35.976 INFO BlockManager - Stopping RedundancyMonitor.
20:14:35.976 INFO BlockManager - Stopping MarkedDeleteBlockScrubber.
20:14:35.980 INFO FSNamesystem - Stopping services started for active state
20:14:35.980 INFO FSNamesystem - Stopping services started for standby state
20:14:35.981 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@392005e7{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:14:35.981 INFO AbstractConnector - Stopped ServerConnector@7bfc64c2{HTTP/1.1, (http/1.1)}{localhost:0}
20:14:35.981 INFO session - node0 Stopped scavenging
20:14:35.981 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@673c937b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:14:35.982 INFO MetricsSystemImpl - Stopping DataNode metrics system...
20:14:35.982 INFO MetricsSystemImpl - DataNode metrics system stopped.
20:14:35.983 INFO MetricsSystemImpl - DataNode metrics system shutdown complete.
20:14:35.992 INFO MemoryStore - Block broadcast_595 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
20:14:36.002 INFO MemoryStore - Block broadcast_595_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
20:14:36.006 INFO BlockManagerInfo - Added broadcast_595_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.6 MiB)
20:14:36.006 INFO SparkContext - Created broadcast 595 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.007 INFO BlockManagerInfo - Removed broadcast_593_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:14:36.007 INFO BlockManagerInfo - Removed broadcast_591_piece0 on localhost:35739 in memory (size: 103.6 KiB, free: 1919.8 MiB)
20:14:36.008 INFO BlockManagerInfo - Removed broadcast_587_piece0 on localhost:35739 in memory (size: 129.0 B, free: 1919.8 MiB)
20:14:36.008 INFO BlockManagerInfo - Removed broadcast_589_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.8 MiB)
20:14:36.009 INFO BlockManagerInfo - Removed broadcast_590_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:14:36.009 INFO BlockManagerInfo - Removed broadcast_594_piece0 on localhost:35739 in memory (size: 103.6 KiB, free: 1919.9 MiB)
20:14:36.009 INFO BlockManagerInfo - Removed broadcast_592_piece0 on localhost:35739 in memory (size: 129.0 B, free: 1919.9 MiB)
20:14:36.010 INFO BlockManagerInfo - Removed broadcast_588_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:14:36.044 INFO MemoryStore - Block broadcast_596 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
20:14:36.051 INFO MemoryStore - Block broadcast_596_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
20:14:36.051 INFO BlockManagerInfo - Added broadcast_596_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.9 MiB)
20:14:36.051 INFO SparkContext - Created broadcast 596 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.078 INFO BlockManagerInfo - Removed broadcast_595_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1920.0 MiB)
20:14:36.086 INFO MemoryStore - Block broadcast_597 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
20:14:36.095 INFO MemoryStore - Block broadcast_597_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
20:14:36.095 INFO BlockManagerInfo - Added broadcast_597_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.9 MiB)
20:14:36.095 INFO SparkContext - Created broadcast 597 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.116 INFO MemoryStore - Block broadcast_598 stored as values in memory (estimated size 536.0 B, free 1919.3 MiB)
20:14:36.116 INFO MemoryStore - Block broadcast_598_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.3 MiB)
20:14:36.117 INFO BlockManagerInfo - Added broadcast_598_piece0 in memory on localhost:35739 (size: 187.0 B, free: 1919.9 MiB)
20:14:36.117 INFO SparkContext - Created broadcast 598 from broadcast at CramSource.java:114
20:14:36.117 INFO MemoryStore - Block broadcast_599 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
20:14:36.124 INFO MemoryStore - Block broadcast_599_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.0 MiB)
20:14:36.124 INFO BlockManagerInfo - Added broadcast_599_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.9 MiB)
20:14:36.124 INFO SparkContext - Created broadcast 599 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.138 INFO MemoryStore - Block broadcast_600 stored as values in memory (estimated size 608.0 B, free 1919.0 MiB)
20:14:36.139 INFO MemoryStore - Block broadcast_600_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.0 MiB)
20:14:36.139 INFO BlockManagerInfo - Added broadcast_600_piece0 in memory on localhost:35739 (size: 206.0 B, free: 1919.9 MiB)
20:14:36.139 INFO SparkContext - Created broadcast 600 from broadcast at CramSource.java:114
20:14:36.140 INFO MemoryStore - Block broadcast_601 stored as values in memory (estimated size 297.9 KiB, free 1918.7 MiB)
20:14:36.146 INFO MemoryStore - Block broadcast_601_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
20:14:36.146 INFO BlockManagerInfo - Added broadcast_601_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.8 MiB)
20:14:36.146 INFO SparkContext - Created broadcast 601 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.161 INFO MemoryStore - Block broadcast_602 stored as values in memory (estimated size 298.0 KiB, free 1918.3 MiB)
20:14:36.168 INFO MemoryStore - Block broadcast_602_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.3 MiB)
20:14:36.168 INFO BlockManagerInfo - Added broadcast_602_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.8 MiB)
20:14:36.168 INFO SparkContext - Created broadcast 602 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.189 INFO FileInputFormat - Total input files to process : 1
20:14:36.227 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:163
20:14:36.227 INFO DAGScheduler - Got job 223 (collect at ReadsSparkSourceUnitTest.java:163) with 1 output partitions
20:14:36.227 INFO DAGScheduler - Final stage: ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163)
20:14:36.227 INFO DAGScheduler - Parents of final stage: List()
20:14:36.227 INFO DAGScheduler - Missing parents: List()
20:14:36.227 INFO DAGScheduler - Submitting ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:36.244 INFO MemoryStore - Block broadcast_603 stored as values in memory (estimated size 426.2 KiB, free 1917.9 MiB)
20:14:36.245 INFO MemoryStore - Block broadcast_603_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.7 MiB)
20:14:36.245 INFO BlockManagerInfo - Added broadcast_603_piece0 in memory on localhost:35739 (size: 153.6 KiB, free: 1919.6 MiB)
20:14:36.245 INFO SparkContext - Created broadcast 603 from broadcast at DAGScheduler.scala:1580
20:14:36.245 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:36.245 INFO TaskSchedulerImpl - Adding task set 287.0 with 1 tasks resource profile 0
20:14:36.246 INFO TaskSetManager - Starting task 0.0 in stage 287.0 (TID 374) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes)
20:14:36.246 INFO Executor - Running task 0.0 in stage 287.0 (TID 374)
20:14:36.284 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
20:14:36.296 INFO Executor - Finished task 0.0 in stage 287.0 (TID 374). 650141 bytes result sent to driver
20:14:36.297 INFO TaskSetManager - Finished task 0.0 in stage 287.0 (TID 374) in 51 ms on localhost (executor driver) (1/1)
20:14:36.298 INFO TaskSchedulerImpl - Removed TaskSet 287.0, whose tasks have all completed, from pool
20:14:36.298 INFO DAGScheduler - ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163) finished in 0.070 s
20:14:36.298 INFO DAGScheduler - Job 223 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:36.298 INFO TaskSchedulerImpl - Killing all running tasks in stage 287: Stage finished
20:14:36.298 INFO DAGScheduler - Job 223 finished: collect at ReadsSparkSourceUnitTest.java:163, took 0.070887 s
20:14:36.305 INFO MemoryStore - Block broadcast_604 stored as values in memory (estimated size 208.0 B, free 1917.7 MiB)
20:14:36.307 INFO MemoryStore - Block broadcast_604_piece0 stored as bytes in memory (estimated size 222.0 B, free 1917.7 MiB)
20:14:36.307 INFO BlockManagerInfo - Added broadcast_604_piece0 in memory on localhost:35739 (size: 222.0 B, free: 1919.6 MiB)
20:14:36.307 INFO SparkContext - Created broadcast 604 from broadcast at AbstractBinarySamSource.java:82
20:14:36.308 INFO MemoryStore - Block broadcast_605 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
20:14:36.316 INFO MemoryStore - Block broadcast_605_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.4 MiB)
20:14:36.316 INFO BlockManagerInfo - Added broadcast_605_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.6 MiB)
20:14:36.316 INFO SparkContext - Created broadcast 605 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.337 INFO FileInputFormat - Total input files to process : 1
20:14:36.340 INFO SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:265
20:14:36.340 INFO DAGScheduler - Got job 224 (count at ReadsSparkSourceUnitTest.java:265) with 1 output partitions
20:14:36.340 INFO DAGScheduler - Final stage: ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265)
20:14:36.340 INFO DAGScheduler - Parents of final stage: List()
20:14:36.341 INFO DAGScheduler - Missing parents: List()
20:14:36.341 INFO DAGScheduler - Submitting ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:36.357 INFO MemoryStore - Block broadcast_606 stored as values in memory (estimated size 426.2 KiB, free 1917.0 MiB)
20:14:36.358 INFO MemoryStore - Block broadcast_606_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1916.8 MiB)
20:14:36.358 INFO BlockManagerInfo - Added broadcast_606_piece0 in memory on localhost:35739 (size: 153.7 KiB, free: 1919.4 MiB)
20:14:36.358 INFO SparkContext - Created broadcast 606 from broadcast at DAGScheduler.scala:1580
20:14:36.358 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:36.359 INFO TaskSchedulerImpl - Adding task set 288.0 with 1 tasks resource profile 0
20:14:36.359 INFO TaskSetManager - Starting task 0.0 in stage 288.0 (TID 375) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
20:14:36.359 INFO Executor - Running task 0.0 in stage 288.0 (TID 375)
20:14:36.387 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
20:14:36.395 INFO Executor - Finished task 0.0 in stage 288.0 (TID 375). 1075 bytes result sent to driver
20:14:36.395 INFO BlockManagerInfo - Removed broadcast_599_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.5 MiB)
20:14:36.395 INFO TaskSetManager - Finished task 0.0 in stage 288.0 (TID 375) in 36 ms on localhost (executor driver) (1/1)
20:14:36.395 INFO TaskSchedulerImpl - Removed TaskSet 288.0, whose tasks have all completed, from pool
20:14:36.395 INFO DAGScheduler - ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265) finished in 0.054 s
20:14:36.395 INFO DAGScheduler - Job 224 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:36.395 INFO BlockManagerInfo - Removed broadcast_596_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.5 MiB)
20:14:36.395 INFO TaskSchedulerImpl - Killing all running tasks in stage 288: Stage finished
20:14:36.395 INFO DAGScheduler - Job 224 finished: count at ReadsSparkSourceUnitTest.java:265, took 0.055114 s
20:14:36.396 INFO BlockManagerInfo - Removed broadcast_600_piece0 on localhost:35739 in memory (size: 206.0 B, free: 1919.5 MiB)
20:14:36.396 INFO BlockManagerInfo - Removed broadcast_602_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.6 MiB)
20:14:36.397 INFO BlockManagerInfo - Removed broadcast_601_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.6 MiB)
20:14:36.397 INFO BlockManagerInfo - Removed broadcast_603_piece0 on localhost:35739 in memory (size: 153.6 KiB, free: 1919.8 MiB)
20:14:36.397 INFO BlockManagerInfo - Removed broadcast_597_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:14:36.398 INFO BlockManagerInfo - Removed broadcast_598_piece0 on localhost:35739 in memory (size: 187.0 B, free: 1919.8 MiB)
20:14:36.398 INFO MemoryStore - Block broadcast_607 stored as values in memory (estimated size 160.0 B, free 1919.1 MiB)
20:14:36.398 INFO MemoryStore - Block broadcast_607_piece0 stored as bytes in memory (estimated size 212.0 B, free 1919.1 MiB)
20:14:36.398 INFO BlockManagerInfo - Added broadcast_607_piece0 in memory on localhost:35739 (size: 212.0 B, free: 1919.8 MiB)
20:14:36.399 INFO SparkContext - Created broadcast 607 from broadcast at AbstractBinarySamSource.java:82
20:14:36.399 INFO MemoryStore - Block broadcast_608 stored as values in memory (estimated size 298.0 KiB, free 1918.8 MiB)
20:14:36.405 INFO MemoryStore - Block broadcast_608_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.8 MiB)
20:14:36.406 INFO BlockManagerInfo - Added broadcast_608_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.8 MiB)
20:14:36.406 INFO SparkContext - Created broadcast 608 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.426 INFO FileInputFormat - Total input files to process : 1
20:14:36.429 INFO SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:286
20:14:36.429 INFO DAGScheduler - Got job 225 (count at ReadsSparkSourceUnitTest.java:286) with 1 output partitions
20:14:36.429 INFO DAGScheduler - Final stage: ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286)
20:14:36.429 INFO DAGScheduler - Parents of final stage: List()
20:14:36.429 INFO DAGScheduler - Missing parents: List()
20:14:36.429 INFO DAGScheduler - Submitting ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:36.446 INFO MemoryStore - Block broadcast_609 stored as values in memory (estimated size 426.2 KiB, free 1918.3 MiB)
20:14:36.447 INFO MemoryStore - Block broadcast_609_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.2 MiB)
20:14:36.447 INFO BlockManagerInfo - Added broadcast_609_piece0 in memory on localhost:35739 (size: 153.7 KiB, free: 1919.6 MiB)
20:14:36.447 INFO SparkContext - Created broadcast 609 from broadcast at DAGScheduler.scala:1580
20:14:36.447 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:36.447 INFO TaskSchedulerImpl - Adding task set 289.0 with 1 tasks resource profile 0
20:14:36.448 INFO TaskSetManager - Starting task 0.0 in stage 289.0 (TID 376) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7901 bytes)
20:14:36.448 INFO Executor - Running task 0.0 in stage 289.0 (TID 376)
20:14:36.477 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/CEUTrio.HiSeq.WGS.b37.NA12878.snippet_with_unmapped.bam:0+2184
20:14:36.481 INFO Executor - Finished task 0.0 in stage 289.0 (TID 376). 989 bytes result sent to driver
20:14:36.481 INFO TaskSetManager - Finished task 0.0 in stage 289.0 (TID 376) in 33 ms on localhost (executor driver) (1/1)
20:14:36.481 INFO TaskSchedulerImpl - Removed TaskSet 289.0, whose tasks have all completed, from pool
20:14:36.481 INFO DAGScheduler - ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286) finished in 0.052 s
20:14:36.482 INFO DAGScheduler - Job 225 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:36.482 INFO TaskSchedulerImpl - Killing all running tasks in stage 289: Stage finished
20:14:36.482 INFO DAGScheduler - Job 225 finished: count at ReadsSparkSourceUnitTest.java:286, took 0.052659 s
20:14:36.484 INFO MemoryStore - Block broadcast_610 stored as values in memory (estimated size 298.0 KiB, free 1917.9 MiB)
20:14:36.490 INFO MemoryStore - Block broadcast_610_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.8 MiB)
20:14:36.490 INFO BlockManagerInfo - Added broadcast_610_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.6 MiB)
20:14:36.491 INFO SparkContext - Created broadcast 610 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.512 INFO MemoryStore - Block broadcast_611 stored as values in memory (estimated size 298.2 KiB, free 1917.6 MiB)
20:14:36.518 INFO MemoryStore - Block broadcast_611_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.5 MiB)
20:14:36.518 INFO BlockManagerInfo - Added broadcast_611_piece0 in memory on localhost:35739 (size: 50.3 KiB, free: 1919.5 MiB)
20:14:36.518 INFO SparkContext - Created broadcast 611 from newAPIHadoopFile at PathSplitSource.java:96
20:14:36.537 INFO FileInputFormat - Total input files to process : 1
20:14:36.538 INFO FileInputFormat - Total input files to process : 1
20:14:36.547 INFO MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
20:14:36.548 INFO NameNode - Formatting using clusterid: testClusterID
20:14:36.548 INFO FSEditLog - Edit logging is async:true
20:14:36.554 INFO FSNamesystem - KeyProvider: null
20:14:36.554 INFO FSNamesystem - fsLock is fair: true
20:14:36.554 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
20:14:36.554 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
20:14:36.554 INFO FSNamesystem - supergroup = supergroup
20:14:36.554 INFO FSNamesystem - isPermissionEnabled = true
20:14:36.554 INFO FSNamesystem - isStoragePolicyEnabled = true
20:14:36.554 INFO FSNamesystem - HA Enabled: false
20:14:36.555 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:36.555 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:14:36.555 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:14:36.555 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:14:36.555 INFO BlockManager - The block deletion will start around 2025 Feb 10 20:14:36
20:14:36.555 INFO GSet - Computing capacity for map BlocksMap
20:14:36.555 INFO GSet - VM type = 64-bit
20:14:36.555 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
20:14:36.555 INFO GSet - capacity = 2^23 = 8388608 entries
20:14:36.558 INFO BlockManagerInfo - Removed broadcast_611_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.6 MiB)
20:14:36.559 INFO BlockManagerInfo - Removed broadcast_607_piece0 on localhost:35739 in memory (size: 212.0 B, free: 1919.6 MiB)
20:14:36.559 INFO BlockManagerInfo - Removed broadcast_608_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.6 MiB)
20:14:36.559 INFO BlockManagerInfo - Removed broadcast_606_piece0 on localhost:35739 in memory (size: 153.7 KiB, free: 1919.8 MiB)
20:14:36.560 INFO BlockManager - Storage policy satisfier is disabled
20:14:36.560 INFO BlockManager - dfs.block.access.token.enable = false
20:14:36.560 INFO BlockManagerInfo - Removed broadcast_605_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.8 MiB)
20:14:36.560 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:14:36.560 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:14:36.560 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:14:36.560 INFO BlockManager - defaultReplication = 1
20:14:36.560 INFO BlockManager - maxReplication = 512
20:14:36.560 INFO BlockManager - minReplication = 1
20:14:36.560 INFO BlockManager - maxReplicationStreams = 2
20:14:36.560 INFO BlockManager - redundancyRecheckInterval = 3000ms
20:14:36.560 INFO BlockManager - encryptDataTransfer = false
20:14:36.560 INFO BlockManager - maxNumBlocksToLog = 1000
20:14:36.560 INFO BlockManagerInfo - Removed broadcast_610_piece0 on localhost:35739 in memory (size: 50.3 KiB, free: 1919.8 MiB)
20:14:36.560 INFO GSet - Computing capacity for map INodeMap
20:14:36.560 INFO GSet - VM type = 64-bit
20:14:36.560 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
20:14:36.560 INFO GSet - capacity = 2^22 = 4194304 entries
20:14:36.560 INFO BlockManagerInfo - Removed broadcast_609_piece0 on localhost:35739 in memory (size: 153.7 KiB, free: 1920.0 MiB)
20:14:36.561 INFO BlockManagerInfo - Removed broadcast_604_piece0 on localhost:35739 in memory (size: 222.0 B, free: 1920.0 MiB)
20:14:36.561 INFO FSDirectory - ACLs enabled? true
20:14:36.561 INFO FSDirectory - POSIX ACL inheritance enabled? true
20:14:36.561 INFO FSDirectory - XAttrs enabled? true
20:14:36.561 INFO NameNode - Caching file names occurring more than 10 times
20:14:36.561 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:14:36.561 INFO SnapshotManager - SkipList is disabled
20:14:36.561 INFO GSet - Computing capacity for map cachedBlocks
20:14:36.561 INFO GSet - VM type = 64-bit
20:14:36.561 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:14:36.561 INFO GSet - capacity = 2^20 = 1048576 entries
20:14:36.562 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:14:36.562 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:14:36.562 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:14:36.562 INFO FSNamesystem - Retry cache on namenode is enabled
20:14:36.562 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:14:36.562 INFO GSet - Computing capacity for map NameNodeRetryCache
20:14:36.562 INFO GSet - VM type = 64-bit
20:14:36.562 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:14:36.562 INFO GSet - capacity = 2^17 = 131072 entries
20:14:36.562 INFO FSImage - Allocated new BlockPoolId: BP-736909785-10.1.0.79-1739218476562
20:14:36.564 INFO Storage - Storage directory /tmp/minicluster_storage15101762250106709887/name-0-1 has been successfully formatted.
20:14:36.565 INFO Storage - Storage directory /tmp/minicluster_storage15101762250106709887/name-0-2 has been successfully formatted.
20:14:36.575 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage15101762250106709887/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
20:14:36.575 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage15101762250106709887/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
20:14:36.578 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage15101762250106709887/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:14:36.580 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage15101762250106709887/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
20:14:36.581 INFO NNStorageRetentionManager - Going to retain 1 images with txid >= 0
20:14:36.584 INFO FSNamesystem - Stopping services started for active state
20:14:36.584 INFO FSNamesystem - Stopping services started for standby state
20:14:36.584 INFO NameNode - createNameNode []
20:14:36.585 WARN MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
20:14:36.586 INFO MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
20:14:36.586 INFO MetricsSystemImpl - NameNode metrics system started
20:14:36.587 INFO NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
20:14:36.597 INFO JvmPauseMonitor - Starting JVM pause monitor
20:14:36.597 INFO DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
20:14:36.597 INFO DFSUtil - Starting Web-server for hdfs at: http://localhost:0
20:14:36.598 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:14:36.598 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
20:14:36.599 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:14:36.599 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
20:14:36.599 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:14:36.600 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
20:14:36.600 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
20:14:36.600 INFO HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
20:14:36.600 INFO HttpServer2 - Jetty bound to port 40891
20:14:36.601 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:14:36.601 INFO session - DefaultSessionIdManager workerName=node0
20:14:36.601 INFO session - No SessionScavenger set, using defaults
20:14:36.601 INFO session - node0 Scavenging every 600000ms
20:14:36.602 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:14:36.602 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@86a7fe8{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:14:36.687 INFO ContextHandler - Started o.e.j.w.WebAppContext@44bd2e87{hdfs,/,file:///tmp/jetty-localhost-40891-hadoop-hdfs-3_3_6-tests_jar-_-any-970474819103145895/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:14:36.688 INFO AbstractConnector - Started ServerConnector@5733e051{HTTP/1.1, (http/1.1)}{localhost:40891}
20:14:36.688 INFO Server - Started @81536ms
20:14:36.688 INFO FSEditLog - Edit logging is async:true
20:14:36.696 INFO FSNamesystem - KeyProvider: null
20:14:36.696 INFO FSNamesystem - fsLock is fair: true
20:14:36.696 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
20:14:36.696 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
20:14:36.696 INFO FSNamesystem - supergroup = supergroup
20:14:36.696 INFO FSNamesystem - isPermissionEnabled = true
20:14:36.696 INFO FSNamesystem - isStoragePolicyEnabled = true
20:14:36.696 INFO FSNamesystem - HA Enabled: false
20:14:36.696 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:36.696 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
20:14:36.696 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
20:14:36.696 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20:14:36.696 INFO BlockManager - The block deletion will start around 2025 Feb 10 20:14:36
20:14:36.696 INFO GSet - Computing capacity for map BlocksMap
20:14:36.696 INFO GSet - VM type = 64-bit
20:14:36.696 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
20:14:36.696 INFO GSet - capacity = 2^23 = 8388608 entries
20:14:36.701 INFO BlockManager - Storage policy satisfier is disabled
20:14:36.701 INFO BlockManager - dfs.block.access.token.enable = false
20:14:36.701 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
20:14:36.701 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
20:14:36.701 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
20:14:36.701 INFO BlockManager - defaultReplication = 1
20:14:36.701 INFO BlockManager - maxReplication = 512
20:14:36.701 INFO BlockManager - minReplication = 1
20:14:36.701 INFO BlockManager - maxReplicationStreams = 2
20:14:36.701 INFO BlockManager - redundancyRecheckInterval = 3000ms
20:14:36.701 INFO BlockManager - encryptDataTransfer = false
20:14:36.701 INFO BlockManager - maxNumBlocksToLog = 1000
20:14:36.701 INFO GSet - Computing capacity for map INodeMap
20:14:36.701 INFO GSet - VM type = 64-bit
20:14:36.701 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
20:14:36.701 INFO GSet - capacity = 2^22 = 4194304 entries
20:14:36.704 INFO FSDirectory - ACLs enabled? true
20:14:36.704 INFO FSDirectory - POSIX ACL inheritance enabled? true
20:14:36.704 INFO FSDirectory - XAttrs enabled? true
20:14:36.704 INFO NameNode - Caching file names occurring more than 10 times
20:14:36.705 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
20:14:36.705 INFO SnapshotManager - SkipList is disabled
20:14:36.705 INFO GSet - Computing capacity for map cachedBlocks
20:14:36.705 INFO GSet - VM type = 64-bit
20:14:36.705 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
20:14:36.705 INFO GSet - capacity = 2^20 = 1048576 entries
20:14:36.705 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
20:14:36.705 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
20:14:36.705 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20:14:36.705 INFO FSNamesystem - Retry cache on namenode is enabled
20:14:36.705 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20:14:36.705 INFO GSet - Computing capacity for map NameNodeRetryCache
20:14:36.705 INFO GSet - VM type = 64-bit
20:14:36.705 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
20:14:36.705 INFO GSet - capacity = 2^17 = 131072 entries
20:14:36.706 INFO Storage - Lock on /tmp/minicluster_storage15101762250106709887/name-0-1/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:36.707 INFO Storage - Lock on /tmp/minicluster_storage15101762250106709887/name-0-2/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:36.708 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage15101762250106709887/name-0-1/current
20:14:36.708 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage15101762250106709887/name-0-2/current
20:14:36.708 INFO FSImage - No edit log streams selected.
20:14:36.708 INFO FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage15101762250106709887/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
20:14:36.708 INFO FSImageFormatPBINode - Loading 1 INodes.
20:14:36.709 INFO FSImageFormatPBINode - Successfully loaded 1 inodes
20:14:36.709 INFO FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
20:14:36.709 INFO FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
20:14:36.709 INFO FSImage - Loaded image for txid 0 from /tmp/minicluster_storage15101762250106709887/name-0-1/current/fsimage_0000000000000000000
20:14:36.709 INFO FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
20:14:36.709 INFO FSEditLog - Starting log segment at 1
20:14:36.712 INFO NameCache - initialized with 0 entries 0 lookups
20:14:36.712 INFO FSNamesystem - Finished loading FSImage in 7 msecs
20:14:36.712 INFO NameNode - RPC server is binding to localhost:0
20:14:36.712 INFO NameNode - Enable NameNode state context:false
20:14:36.712 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:14:36.712 INFO Server - Listener at localhost:41449
20:14:36.713 INFO Server - Starting Socket Reader #1 for port 0
20:14:36.714 INFO NameNode - Clients are to use localhost:41449 to access this namenode/service.
20:14:36.714 INFO FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
20:14:36.725 INFO LeaseManager - Number of blocks under construction: 0
20:14:36.725 INFO DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
20:14:36.725 INFO BlockManager - Start MarkedDeleteBlockScrubber thread
20:14:36.726 INFO BlockManager - initializing replication queues
20:14:36.726 INFO StateChange - STATE* Leaving safe mode after 0 secs
20:14:36.726 INFO StateChange - STATE* Network topology has 0 racks and 0 datanodes
20:14:36.726 INFO StateChange - STATE* UnderReplicatedBlocks has 0 blocks
20:14:36.728 INFO Server - IPC Server Responder: starting
20:14:36.728 INFO Server - IPC Server listener on 0: starting
20:14:36.730 INFO NameNode - NameNode RPC up at: localhost/127.0.0.1:41449
20:14:36.730 INFO BlockManager - Total number of blocks = 0
20:14:36.730 INFO BlockManager - Number of invalid blocks = 0
20:14:36.730 INFO BlockManager - Number of under-replicated blocks = 0
20:14:36.730 INFO BlockManager - Number of over-replicated blocks = 0
20:14:36.730 INFO BlockManager - Number of blocks being written = 0
20:14:36.730 INFO StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 5 msec
20:14:36.730 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:14:36.730 INFO FSNamesystem - Starting services required for active state
20:14:36.730 INFO FSDirectory - Initializing quota with 12 thread(s)
20:14:36.731 INFO FSDirectory - Quota initialization completed in 0 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
20:14:36.731 INFO CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
20:14:36.732 INFO MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data1,[DISK]file:/tmp/minicluster_storage15101762250106709887/data/data2
20:14:36.732 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data1
20:14:36.733 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data2
20:14:36.743 INFO MetricsSystemImpl - DataNode metrics system started (again)
20:14:36.743 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:36.743 INFO BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
20:14:36.743 INFO DataNode - Configured hostname is 127.0.0.1
20:14:36.743 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20:14:36.743 INFO DataNode - Starting DataNode with maxLockedMemory = 0
20:14:36.744 INFO DataNode - Opened streaming server at /127.0.0.1:42209
20:14:36.744 INFO DataNode - Balancing bandwidth is 104857600 bytes/s
20:14:36.744 INFO DataNode - Number threads for balancing is 100
20:14:36.745 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
20:14:36.746 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
20:14:36.746 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20:14:36.747 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
20:14:36.747 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20:14:36.747 INFO HttpServer2 - Jetty bound to port 45479
20:14:36.747 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
20:14:36.748 INFO session - DefaultSessionIdManager workerName=node0
20:14:36.748 INFO session - No SessionScavenger set, using defaults
20:14:36.748 INFO session - node0 Scavenging every 660000ms
20:14:36.748 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@7824da7b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
20:14:36.843 INFO ContextHandler - Started o.e.j.w.WebAppContext@5ea3b435{datanode,/,file:///tmp/jetty-localhost-45479-hadoop-hdfs-3_3_6-tests_jar-_-any-2822256258062057914/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:14:36.844 INFO AbstractConnector - Started ServerConnector@95cba80{HTTP/1.1, (http/1.1)}{localhost:45479}
20:14:36.844 INFO Server - Started @81691ms
20:14:36.845 WARN DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
20:14:36.845 INFO DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:44721
20:14:36.845 INFO DataNode - dnUserName = runner
20:14:36.845 INFO DataNode - supergroup = supergroup
20:14:36.846 INFO JvmPauseMonitor - Starting JVM pause monitor
20:14:36.846 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
20:14:36.846 INFO Server - Listener at localhost:40773
20:14:36.846 INFO Server - Starting Socket Reader #1 for port 0
20:14:36.848 INFO DataNode - Opened IPC server at /127.0.0.1:40773
20:14:36.858 INFO DataNode - Refresh request received for nameservices: null
20:14:36.858 INFO DataNode - Starting BPOfferServices for nameservices: <default>
20:14:36.858 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
20:14:36.859 INFO DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:41449 starting to offer service
20:14:36.859 INFO Server - IPC Server Responder: starting
20:14:36.859 INFO Server - IPC Server listener on 0: starting
20:14:36.861 INFO DataNode - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:41449
20:14:36.861 INFO Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
20:14:36.862 INFO Storage - Lock on /tmp/minicluster_storage15101762250106709887/data/data1/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:36.862 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data1 is not formatted for namespace 1020491715. Formatting...
20:14:36.862 INFO Storage - Generated new storageID DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09 for directory /tmp/minicluster_storage15101762250106709887/data/data1
20:14:36.863 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
20:14:36.863 INFO MiniDFSCluster - dnInfo.length != numDataNodes
20:14:36.863 INFO MiniDFSCluster - Waiting for cluster to become active
20:14:36.864 INFO Storage - Lock on /tmp/minicluster_storage15101762250106709887/data/data2/in_use.lock acquired by nodename 14919@fv-az1120-225
20:14:36.864 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data2 is not formatted for namespace 1020491715. Formatting...
20:14:36.864 INFO Storage - Generated new storageID DS-634be1e1-96bd-41ed-886c-eb197e800120 for directory /tmp/minicluster_storage15101762250106709887/data/data2
20:14:36.877 INFO Storage - Analyzing storage directories for bpid BP-736909785-10.1.0.79-1739218476562
20:14:36.877 INFO Storage - Locking is disabled for /tmp/minicluster_storage15101762250106709887/data/data1/current/BP-736909785-10.1.0.79-1739218476562
20:14:36.877 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data1 and block pool id BP-736909785-10.1.0.79-1739218476562 is not formatted. Formatting ...
20:14:36.877 INFO Storage - Formatting block pool BP-736909785-10.1.0.79-1739218476562 directory /tmp/minicluster_storage15101762250106709887/data/data1/current/BP-736909785-10.1.0.79-1739218476562/current
20:14:36.890 INFO Storage - Analyzing storage directories for bpid BP-736909785-10.1.0.79-1739218476562
20:14:36.890 INFO Storage - Locking is disabled for /tmp/minicluster_storage15101762250106709887/data/data2/current/BP-736909785-10.1.0.79-1739218476562
20:14:36.890 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data2 and block pool id BP-736909785-10.1.0.79-1739218476562 is not formatted. Formatting ...
20:14:36.890 INFO Storage - Formatting block pool BP-736909785-10.1.0.79-1739218476562 directory /tmp/minicluster_storage15101762250106709887/data/data2/current/BP-736909785-10.1.0.79-1739218476562/current
20:14:36.891 INFO DataNode - Setting up storage: nsid=1020491715;bpid=BP-736909785-10.1.0.79-1739218476562;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=1020491715;c=1739218476562;bpid=BP-736909785-10.1.0.79-1739218476562;dnuuid=null
20:14:36.892 INFO DataNode - Generated and persisted new Datanode UUID c0fb7960-c6a4-4993-aba6-555002b65303
20:14:36.892 INFO FsDatasetImpl - The datanode lock is a read write lock
20:14:36.893 INFO FsDatasetImpl - Added new volume: DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09
20:14:36.893 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data1, StorageType: DISK
20:14:36.894 INFO FsDatasetImpl - Added new volume: DS-634be1e1-96bd-41ed-886c-eb197e800120
20:14:36.894 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage15101762250106709887/data/data2, StorageType: DISK
20:14:36.894 INFO MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
20:14:36.894 INFO FsDatasetImpl - Registered FSDatasetState MBean
20:14:36.894 INFO FsDatasetImpl - Adding block pool BP-736909785-10.1.0.79-1739218476562
20:14:36.895 INFO FsDatasetImpl - Scanning block pool BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data1...
20:14:36.895 INFO FsDatasetImpl - Scanning block pool BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data2...
20:14:36.895 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage15101762250106709887/data/data1/current/BP-736909785-10.1.0.79-1739218476562/current, will proceed with Du for space computation calculation,
20:14:36.895 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage15101762250106709887/data/data2/current/BP-736909785-10.1.0.79-1739218476562/current, will proceed with Du for space computation calculation,
20:14:36.907 INFO FsDatasetImpl - Time taken to scan block pool BP-736909785-10.1.0.79-1739218476562 on /tmp/minicluster_storage15101762250106709887/data/data1: 12ms
20:14:36.909 INFO FsDatasetImpl - Time taken to scan block pool BP-736909785-10.1.0.79-1739218476562 on /tmp/minicluster_storage15101762250106709887/data/data2: 14ms
20:14:36.909 INFO FsDatasetImpl - Total time to scan all replicas for block pool BP-736909785-10.1.0.79-1739218476562: 14ms
20:14:36.909 INFO FsDatasetImpl - Adding replicas to map for block pool BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data1...
20:14:36.909 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage15101762250106709887/data/data1/current/BP-736909785-10.1.0.79-1739218476562/current/replicas doesn't exist
20:14:36.909 INFO FsDatasetImpl - Adding replicas to map for block pool BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data2...
20:14:36.909 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage15101762250106709887/data/data2/current/BP-736909785-10.1.0.79-1739218476562/current/replicas doesn't exist
20:14:36.909 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data2: 0ms
20:14:36.909 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data1: 1ms
20:14:36.909 INFO FsDatasetImpl - Total time to add all replicas to map for block pool BP-736909785-10.1.0.79-1739218476562: 1ms
20:14:36.909 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage15101762250106709887/data/data1
20:14:36.909 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage15101762250106709887/data/data1
20:14:36.909 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage15101762250106709887/data/data2
20:14:36.910 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage15101762250106709887/data/data2
20:14:36.910 INFO VolumeScanner - Now scanning bpid BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data2
20:14:36.910 WARN DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
20:14:36.910 INFO VolumeScanner - Now scanning bpid BP-736909785-10.1.0.79-1739218476562 on volume /tmp/minicluster_storage15101762250106709887/data/data1
20:14:36.910 INFO DirectoryScanner - Periodic Directory Tree Verification scan starting in 3862343ms with interval of 21600000ms and throttle limit of -1ms/s
20:14:36.910 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage15101762250106709887/data/data2, DS-634be1e1-96bd-41ed-886c-eb197e800120): finished scanning block pool BP-736909785-10.1.0.79-1739218476562
20:14:36.910 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage15101762250106709887/data/data1, DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09): finished scanning block pool BP-736909785-10.1.0.79-1739218476562
20:14:36.910 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage15101762250106709887/data/data1, DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09): no suitable block pools found to scan. Waiting 1814400000 ms.
20:14:36.910 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage15101762250106709887/data/data2, DS-634be1e1-96bd-41ed-886c-eb197e800120): no suitable block pools found to scan. Waiting 1814400000 ms.
20:14:36.910 INFO DataNode - Block pool BP-736909785-10.1.0.79-1739218476562 (Datanode Uuid c0fb7960-c6a4-4993-aba6-555002b65303) service to localhost/127.0.0.1:41449 beginning handshake with NN
20:14:36.911 INFO StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:42209, datanodeUuid=c0fb7960-c6a4-4993-aba6-555002b65303, infoPort=44721, infoSecurePort=0, ipcPort=40773, storageInfo=lv=-57;cid=testClusterID;nsid=1020491715;c=1739218476562) storage c0fb7960-c6a4-4993-aba6-555002b65303
20:14:36.911 INFO NetworkTopology - Adding a new node: /default-rack/127.0.0.1:42209
20:14:36.911 INFO BlockReportLeaseManager - Registered DN c0fb7960-c6a4-4993-aba6-555002b65303 (127.0.0.1:42209).
20:14:36.911 INFO DataNode - Block pool BP-736909785-10.1.0.79-1739218476562 (Datanode Uuid c0fb7960-c6a4-4993-aba6-555002b65303) service to localhost/127.0.0.1:41449 successfully registered with NN
20:14:36.911 INFO DataNode - For namenode localhost/127.0.0.1:41449 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
20:14:36.911 INFO DataNode - Starting IBR Task Handler.
20:14:36.912 INFO DatanodeDescriptor - Adding new storage ID DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09 for DN 127.0.0.1:42209
20:14:36.912 INFO DatanodeDescriptor - Adding new storage ID DS-634be1e1-96bd-41ed-886c-eb197e800120 for DN 127.0.0.1:42209
20:14:36.912 INFO DataNode - After receiving heartbeat response, updating state of namenode localhost:41449 to active
20:14:36.913 INFO BlockStateChange - BLOCK* processReport 0x6cec88482a636c09 with lease ID 0x9f7da5a135b3a3f8: Processing first storage report for DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09 from datanode DatanodeRegistration(127.0.0.1:42209, datanodeUuid=c0fb7960-c6a4-4993-aba6-555002b65303, infoPort=44721, infoSecurePort=0, ipcPort=40773, storageInfo=lv=-57;cid=testClusterID;nsid=1020491715;c=1739218476562)
20:14:36.913 INFO BlockStateChange - BLOCK* processReport 0x6cec88482a636c09 with lease ID 0x9f7da5a135b3a3f8: from storage DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09 node DatanodeRegistration(127.0.0.1:42209, datanodeUuid=c0fb7960-c6a4-4993-aba6-555002b65303, infoPort=44721, infoSecurePort=0, ipcPort=40773, storageInfo=lv=-57;cid=testClusterID;nsid=1020491715;c=1739218476562), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
20:14:36.913 INFO BlockStateChange - BLOCK* processReport 0x6cec88482a636c09 with lease ID 0x9f7da5a135b3a3f8: Processing first storage report for DS-634be1e1-96bd-41ed-886c-eb197e800120 from datanode DatanodeRegistration(127.0.0.1:42209, datanodeUuid=c0fb7960-c6a4-4993-aba6-555002b65303, infoPort=44721, infoSecurePort=0, ipcPort=40773, storageInfo=lv=-57;cid=testClusterID;nsid=1020491715;c=1739218476562)
20:14:36.913 INFO BlockStateChange - BLOCK* processReport 0x6cec88482a636c09 with lease ID 0x9f7da5a135b3a3f8: from storage DS-634be1e1-96bd-41ed-886c-eb197e800120 node DatanodeRegistration(127.0.0.1:42209, datanodeUuid=c0fb7960-c6a4-4993-aba6-555002b65303, infoPort=44721, infoSecurePort=0, ipcPort=40773, storageInfo=lv=-57;cid=testClusterID;nsid=1020491715;c=1739218476562), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
20:14:36.913 INFO DataNode - Successfully sent block report 0x6cec88482a636c09 with lease ID 0x9f7da5a135b3a3f8 to namenode: localhost/127.0.0.1:41449, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
20:14:36.913 INFO DataNode - Got finalize command for block pool BP-736909785-10.1.0.79-1739218476562
20:14:36.963 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
20:14:36.964 INFO MiniDFSCluster - Cluster is active
20:14:36.965 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:36.966 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.bam dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
20:14:36.967 INFO StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:42209 for /user/runner/hdfs.bam
20:14:36.968 INFO DataNode - Receiving BP-736909785-10.1.0.79-1739218476562:blk_1073741825_1001 src: /127.0.0.1:52462 dest: /127.0.0.1:42209
20:14:36.969 INFO clienttrace - src: /127.0.0.1:52462, dest: /127.0.0.1:42209, bytes: 2396, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-854735441_1, offset: 0, srvID: c0fb7960-c6a4-4993-aba6-555002b65303, blockid: BP-736909785-10.1.0.79-1739218476562:blk_1073741825_1001, duration(ns): 422478
20:14:36.970 INFO DataNode - PacketResponder: BP-736909785-10.1.0.79-1739218476562:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
20:14:36.970 INFO FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/runner/hdfs.bam
20:14:37.371 INFO StateChange - DIR* completeFile: /user/runner/hdfs.bam is closed by DFSClient_NONMAPREDUCE_-854735441_1
20:14:37.372 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:37.372 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.bai dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
20:14:37.374 INFO StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:42209 for /user/runner/hdfs.bai
20:14:37.374 INFO DataNode - Receiving BP-736909785-10.1.0.79-1739218476562:blk_1073741826_1002 src: /127.0.0.1:52464 dest: /127.0.0.1:42209
20:14:37.375 INFO clienttrace - src: /127.0.0.1:52464, dest: /127.0.0.1:42209, bytes: 656, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-854735441_1, offset: 0, srvID: c0fb7960-c6a4-4993-aba6-555002b65303, blockid: BP-736909785-10.1.0.79-1739218476562:blk_1073741826_1002, duration(ns): 439766
20:14:37.375 INFO DataNode - PacketResponder: BP-736909785-10.1.0.79-1739218476562:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
20:14:37.376 INFO FSNamesystem - BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/runner/hdfs.bai
20:14:37.777 INFO StateChange - DIR* completeFile: /user/runner/hdfs.bai is closed by DFSClient_NONMAPREDUCE_-854735441_1
20:14:37.779 INFO MemoryStore - Block broadcast_612 stored as values in memory (estimated size 298.3 KiB, free 1919.7 MiB)
20:14:37.785 INFO MemoryStore - Block broadcast_612_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
20:14:37.785 INFO BlockManagerInfo - Added broadcast_612_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1920.0 MiB)
20:14:37.785 INFO SparkContext - Created broadcast 612 from newAPIHadoopFile at PathSplitSource.java:96
20:14:37.805 INFO FileInputFormat - Total input files to process : 1
20:14:37.841 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:212
20:14:37.841 INFO DAGScheduler - Got job 226 (collect at ReadsSparkSourceUnitTest.java:212) with 1 output partitions
20:14:37.841 INFO DAGScheduler - Final stage: ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212)
20:14:37.841 INFO DAGScheduler - Parents of final stage: List()
20:14:37.841 INFO DAGScheduler - Missing parents: List()
20:14:37.841 INFO DAGScheduler - Submitting ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:37.857 INFO MemoryStore - Block broadcast_613 stored as values in memory (estimated size 426.4 KiB, free 1919.2 MiB)
20:14:37.859 INFO MemoryStore - Block broadcast_613_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1919.1 MiB)
20:14:37.859 INFO BlockManagerInfo - Added broadcast_613_piece0 in memory on localhost:35739 (size: 154.0 KiB, free: 1919.8 MiB)
20:14:37.859 INFO SparkContext - Created broadcast 613 from broadcast at DAGScheduler.scala:1580
20:14:37.859 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:37.859 INFO TaskSchedulerImpl - Adding task set 290.0 with 1 tasks resource profile 0
20:14:37.860 INFO TaskSetManager - Starting task 0.0 in stage 290.0 (TID 377) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7899 bytes)
20:14:37.860 INFO Executor - Running task 0.0 in stage 290.0 (TID 377)
20:14:37.889 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/spark/datasources/ReadsSparkSource/hdfs_file_test.bam:0+2396
20:14:37.891 INFO Executor - Finished task 0.0 in stage 290.0 (TID 377). 5288 bytes result sent to driver
20:14:37.891 INFO TaskSetManager - Finished task 0.0 in stage 290.0 (TID 377) in 32 ms on localhost (executor driver) (1/1)
20:14:37.891 INFO TaskSchedulerImpl - Removed TaskSet 290.0, whose tasks have all completed, from pool
20:14:37.892 INFO DAGScheduler - ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212) finished in 0.051 s
20:14:37.892 INFO DAGScheduler - Job 226 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:37.892 INFO TaskSchedulerImpl - Killing all running tasks in stage 290: Stage finished
20:14:37.892 INFO DAGScheduler - Job 226 finished: collect at ReadsSparkSourceUnitTest.java:212, took 0.051013 s
20:14:37.892 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:37.893 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:37.893 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:37.893 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:37.894 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.bai dst=null perm=null proto=rpc
20:14:37.894 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:37.895 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:37.895 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:37.897 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:14:37.897 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:14:37.898 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:14:37.898 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.sbi dst=null perm=null proto=rpc
20:14:37.899 INFO MemoryStore - Block broadcast_614 stored as values in memory (estimated size 298.2 KiB, free 1918.8 MiB)
20:14:37.905 INFO MemoryStore - Block broadcast_614_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.8 MiB)
20:14:37.905 INFO BlockManagerInfo - Added broadcast_614_piece0 in memory on localhost:35739 (size: 50.2 KiB, free: 1919.8 MiB)
20:14:37.905 INFO SparkContext - Created broadcast 614 from newAPIHadoopFile at PathSplitSource.java:96
20:14:37.925 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:37.925 INFO FileInputFormat - Total input files to process : 1
20:14:37.926 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:37.961 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:213
20:14:37.961 INFO DAGScheduler - Got job 227 (collect at ReadsSparkSourceUnitTest.java:213) with 1 output partitions
20:14:37.961 INFO DAGScheduler - Final stage: ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213)
20:14:37.962 INFO DAGScheduler - Parents of final stage: List()
20:14:37.962 INFO DAGScheduler - Missing parents: List()
20:14:37.962 INFO DAGScheduler - Submitting ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96), which has no missing parents
20:14:37.979 INFO MemoryStore - Block broadcast_615 stored as values in memory (estimated size 426.4 KiB, free 1918.3 MiB)
20:14:37.980 INFO MemoryStore - Block broadcast_615_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1918.2 MiB)
20:14:37.980 INFO BlockManagerInfo - Added broadcast_615_piece0 in memory on localhost:35739 (size: 154.0 KiB, free: 1919.6 MiB)
20:14:37.980 INFO SparkContext - Created broadcast 615 from broadcast at DAGScheduler.scala:1580
20:14:37.980 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
20:14:37.980 INFO TaskSchedulerImpl - Adding task set 291.0 with 1 tasks resource profile 0
20:14:37.981 INFO TaskSetManager - Starting task 0.0 in stage 291.0 (TID 378) (localhost, executor driver, partition 0, ANY, 7796 bytes)
20:14:37.981 INFO Executor - Running task 0.0 in stage 291.0 (TID 378)
20:14:38.009 INFO NewHadoopRDD - Input split: hdfs://localhost:41449/user/runner/hdfs.bam:0+2396
20:14:38.010 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.010 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.011 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:14:38.011 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.012 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.012 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.bai dst=null perm=null proto=rpc
20:14:38.012 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:38.013 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:38.013 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:38.014 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:14:38.015 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:14:38.015 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:14:38.016 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.016 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.017 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
20:14:38.017 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.023 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
20:14:38.023 INFO BlockManagerInfo - Removed broadcast_612_piece0 on localhost:35739 in memory (size: 50.2 KiB, free: 1919.7 MiB)
20:14:38.024 INFO BlockManagerInfo - Removed broadcast_613_piece0 on localhost:35739 in memory (size: 154.0 KiB, free: 1919.8 MiB)
20:14:38.024 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.bai dst=null perm=null proto=rpc
20:14:38.024 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:38.024 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:38.025 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
20:14:38.026 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
20:14:38.027 INFO Executor - Finished task 0.0 in stage 291.0 (TID 378). 5331 bytes result sent to driver
20:14:38.028 INFO TaskSetManager - Finished task 0.0 in stage 291.0 (TID 378) in 47 ms on localhost (executor driver) (1/1)
20:14:38.028 INFO TaskSchedulerImpl - Removed TaskSet 291.0, whose tasks have all completed, from pool
20:14:38.028 INFO DAGScheduler - ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213) finished in 0.066 s
20:14:38.028 INFO DAGScheduler - Job 227 is finished. Cancelling potential speculative or zombie tasks for this job
20:14:38.028 INFO TaskSchedulerImpl - Killing all running tasks in stage 291: Stage finished
20:14:38.028 INFO DAGScheduler - Job 227 finished: collect at ReadsSparkSourceUnitTest.java:213, took 0.066913 s
20:14:38.029 INFO MiniDFSCluster - Shutting down the Mini HDFS Cluster
20:14:38.029 INFO MiniDFSCluster - Shutting down DataNode 0
20:14:38.029 INFO DirectoryScanner - Shutdown has been called
20:14:38.029 INFO DataNode - Closing all peers.
20:14:38.029 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage15101762250106709887/data/data2, DS-634be1e1-96bd-41ed-886c-eb197e800120) exiting.
20:14:38.029 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage15101762250106709887/data/data1, DS-e2013f10-bf17-4e60-8bd9-63c4bb35ed09) exiting.
20:14:38.032 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@5ea3b435{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
20:14:38.032 INFO AbstractConnector - Stopped ServerConnector@95cba80{HTTP/1.1, (http/1.1)}{localhost:0}
20:14:38.032 INFO session - node0 Stopped scavenging
20:14:38.032 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@7824da7b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:14:38.033 INFO DataNode - Waiting up to 30 seconds for transfer threads to complete
20:14:38.033 INFO Server - Stopping server on 40773
20:14:38.033 INFO Server - Stopping IPC Server listener on 0
20:14:38.034 INFO Server - Stopping IPC Server Responder
20:14:38.034 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:14:38.034 WARN DataNode - Ending block pool service for: Block pool BP-736909785-10.1.0.79-1739218476562 (Datanode Uuid c0fb7960-c6a4-4993-aba6-555002b65303) service to localhost/127.0.0.1:41449
20:14:38.034 INFO DataNode - Removed Block pool BP-736909785-10.1.0.79-1739218476562 (Datanode Uuid c0fb7960-c6a4-4993-aba6-555002b65303)
20:14:38.034 INFO FsDatasetImpl - Removing block pool BP-736909785-10.1.0.79-1739218476562
20:14:38.034 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
20:14:38.034 ERROR DataNode - Command processor encountered interrupt and exit.
20:14:38.034 WARN DataNode - Ending command processor service for: Thread[Command processor,5,main]
20:14:38.034 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:14:38.034 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
20:14:38.035 INFO FsDatasetAsyncDiskService - Shutting down all async disk service threads
20:14:38.035 INFO FsDatasetAsyncDiskService - All async disk service threads have been shut down
20:14:38.035 INFO RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
20:14:38.035 INFO RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
20:14:38.035 INFO DataNode - Shutdown complete.
20:14:38.035 INFO MiniDFSCluster - Shutting down the namenode
20:14:38.035 INFO FSNamesystem - Stopping services started for active state
20:14:38.035 INFO FSEditLog - Ending log segment 1, 13
20:14:38.035 INFO FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
20:14:38.035 INFO FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
20:14:38.035 INFO FSEditLog - Number of transactions: 14 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 4 Number of syncs: 11 SyncTimes(ms): 2 0
20:14:38.036 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage15101762250106709887/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage15101762250106709887/name-0-1/current/edits_0000000000000000001-0000000000000000014
20:14:38.036 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage15101762250106709887/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage15101762250106709887/name-0-2/current/edits_0000000000000000001-0000000000000000014
20:14:38.036 INFO FSEditLog - FSEditLogAsync was interrupted, exiting
20:14:38.036 INFO CacheReplicationMonitor - Shutting down CacheReplicationMonitor
20:14:38.036 INFO Server - Stopping server on 41449
20:14:38.037 INFO Server - Stopping IPC Server Responder
20:14:38.037 INFO Server - Stopping IPC Server listener on 0
20:14:38.037 INFO BlockManager - Stopping MarkedDeleteBlockScrubber.
20:14:38.037 INFO BlockManager - Stopping RedundancyMonitor.
20:14:38.043 INFO FSNamesystem - Stopping services started for active state
20:14:38.043 INFO FSNamesystem - Stopping services started for standby state
20:14:38.044 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@44bd2e87{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
20:14:38.045 INFO AbstractConnector - Stopped ServerConnector@5733e051{HTTP/1.1, (http/1.1)}{localhost:0}
20:14:38.045 INFO session - node0 Stopped scavenging
20:14:38.045 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@86a7fe8{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
20:14:38.046 INFO MetricsSystemImpl - Stopping DataNode metrics system...
20:14:38.046 INFO MetricsSystemImpl - DataNode metrics system stopped.
20:14:38.046 INFO MetricsSystemImpl - DataNode metrics system shutdown complete.