18:46:55.523 INFO MemoryStore - Block broadcast_536 stored as values in memory (estimated size 297.9 KiB, free 1918.8 MiB)
18:46:55.529 INFO MemoryStore - Block broadcast_536_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.8 MiB)
18:46:55.530 INFO BlockManagerInfo - Added broadcast_536_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.8 MiB)
18:46:55.530 INFO SparkContext - Created broadcast 536 from newAPIHadoopFile at PathSplitSource.java:96
18:46:55.552 INFO MemoryStore - Block broadcast_537 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
18:46:55.563 INFO MemoryStore - Block broadcast_537_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
18:46:55.563 INFO BlockManagerInfo - Added broadcast_537_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.8 MiB)
18:46:55.563 INFO SparkContext - Created broadcast 537 from newAPIHadoopFile at PathSplitSource.java:96
18:46:55.586 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:55.586 INFO DAGScheduler - Got job 201 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:55.586 INFO DAGScheduler - Final stage: ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:55.586 INFO DAGScheduler - Parents of final stage: List()
18:46:55.586 INFO DAGScheduler - Missing parents: List()
18:46:55.586 INFO DAGScheduler - Submitting ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:55.587 INFO MemoryStore - Block broadcast_538 stored as values in memory (estimated size 3.0 KiB, free 1918.4 MiB)
18:46:55.587 INFO MemoryStore - Block broadcast_538_piece0 stored as bytes in memory (estimated size 1777.0 B, free 1918.4 MiB)
18:46:55.587 INFO BlockManagerInfo - Added broadcast_538_piece0 in memory on localhost:45727 (size: 1777.0 B, free: 1919.7 MiB)
18:46:55.587 INFO SparkContext - Created broadcast 538 from broadcast at DAGScheduler.scala:1580
18:46:55.587 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 265 (ParallelCollectionRDD[1270] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:55.588 INFO TaskSchedulerImpl - Adding task set 265.0 with 4 tasks resource profile 0
18:46:55.588 INFO TaskSetManager - Starting task 0.0 in stage 265.0 (TID 321) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes)
18:46:55.589 INFO TaskSetManager - Starting task 1.0 in stage 265.0 (TID 322) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes)
18:46:55.589 INFO TaskSetManager - Starting task 2.0 in stage 265.0 (TID 323) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes)
18:46:55.589 INFO TaskSetManager - Starting task 3.0 in stage 265.0 (TID 324) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes)
18:46:55.590 INFO Executor - Running task 2.0 in stage 265.0 (TID 323)
18:46:55.590 INFO Executor - Running task 0.0 in stage 265.0 (TID 321)
18:46:55.590 INFO Executor - Running task 1.0 in stage 265.0 (TID 322)
18:46:55.591 INFO Executor - Finished task 2.0 in stage 265.0 (TID 323). 40337 bytes result sent to driver
18:46:55.591 INFO Executor - Running task 3.0 in stage 265.0 (TID 324)
18:46:55.592 INFO Executor - Finished task 1.0 in stage 265.0 (TID 322). 40426 bytes result sent to driver
18:46:55.592 INFO TaskSetManager - Finished task 2.0 in stage 265.0 (TID 323) in 3 ms on localhost (executor driver) (1/4)
18:46:55.593 INFO Executor - Finished task 3.0 in stage 265.0 (TID 324). 40762 bytes result sent to driver
18:46:55.593 INFO Executor - Finished task 0.0 in stage 265.0 (TID 321). 40398 bytes result sent to driver
18:46:55.593 INFO TaskSetManager - Finished task 1.0 in stage 265.0 (TID 322) in 4 ms on localhost (executor driver) (2/4)
18:46:55.593 INFO TaskSetManager - Finished task 0.0 in stage 265.0 (TID 321) in 5 ms on localhost (executor driver) (3/4)
18:46:55.594 INFO TaskSetManager - Finished task 3.0 in stage 265.0 (TID 324) in 5 ms on localhost (executor driver) (4/4)
18:46:55.594 INFO TaskSchedulerImpl - Removed TaskSet 265.0, whose tasks have all completed, from pool
18:46:55.594 INFO DAGScheduler - ResultStage 265 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.008 s
18:46:55.594 INFO DAGScheduler - Job 201 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:55.594 INFO TaskSchedulerImpl - Killing all running tasks in stage 265: Stage finished
18:46:55.594 INFO DAGScheduler - Job 201 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.008307 s
18:46:55.595 INFO FileInputFormat - Total input files to process : 1
18:46:55.637 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:55.637 INFO DAGScheduler - Got job 202 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:55.638 INFO DAGScheduler - Final stage: ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:55.638 INFO DAGScheduler - Parents of final stage: List()
18:46:55.638 INFO DAGScheduler - Missing parents: List()
18:46:55.638 INFO DAGScheduler - Submitting ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:55.664 INFO MemoryStore - Block broadcast_539 stored as values in memory (estimated size 426.2 KiB, free 1918.0 MiB)
18:46:55.666 INFO MemoryStore - Block broadcast_539_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.9 MiB)
18:46:55.666 INFO BlockManagerInfo - Added broadcast_539_piece0 in memory on localhost:45727 (size: 153.6 KiB, free: 1919.6 MiB)
18:46:55.666 INFO SparkContext - Created broadcast 539 from broadcast at DAGScheduler.scala:1580
18:46:55.666 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 266 (MapPartitionsRDD[1277] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:55.666 INFO TaskSchedulerImpl - Adding task set 266.0 with 1 tasks resource profile 0
18:46:55.667 INFO TaskSetManager - Starting task 0.0 in stage 266.0 (TID 325) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
18:46:55.667 INFO Executor - Running task 0.0 in stage 266.0 (TID 325)
18:46:55.697 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
18:46:55.700 INFO Executor - Finished task 0.0 in stage 266.0 (TID 325). 159572 bytes result sent to driver
18:46:55.701 INFO TaskSetManager - Finished task 0.0 in stage 266.0 (TID 325) in 34 ms on localhost (executor driver) (1/1)
18:46:55.701 INFO TaskSchedulerImpl - Removed TaskSet 266.0, whose tasks have all completed, from pool
18:46:55.701 INFO DAGScheduler - ResultStage 266 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.063 s
18:46:55.701 INFO DAGScheduler - Job 202 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:55.701 INFO TaskSchedulerImpl - Killing all running tasks in stage 266: Stage finished
18:46:55.701 INFO DAGScheduler - Job 202 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.063780 s
18:46:55.704 INFO MemoryStore - Block broadcast_540 stored as values in memory (estimated size 298.0 KiB, free 1917.6 MiB)
18:46:55.711 INFO MemoryStore - Block broadcast_540_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.5 MiB)
18:46:55.711 INFO BlockManagerInfo - Added broadcast_540_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.5 MiB)
18:46:55.711 INFO SparkContext - Created broadcast 540 from newAPIHadoopFile at PathSplitSource.java:96
18:46:55.734 INFO MemoryStore - Block broadcast_541 stored as values in memory (estimated size 298.0 KiB, free 1917.2 MiB)
18:46:55.741 INFO MemoryStore - Block broadcast_541_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1917.2 MiB)
18:46:55.741 INFO BlockManagerInfo - Added broadcast_541_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.5 MiB)
18:46:55.741 INFO SparkContext - Created broadcast 541 from newAPIHadoopFile at PathSplitSource.java:96
18:46:55.764 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:55.764 INFO DAGScheduler - Got job 203 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:55.764 INFO DAGScheduler - Final stage: ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:55.764 INFO DAGScheduler - Parents of final stage: List()
18:46:55.764 INFO DAGScheduler - Missing parents: List()
18:46:55.764 INFO DAGScheduler - Submitting ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:55.764 INFO MemoryStore - Block broadcast_542 stored as values in memory (estimated size 3.0 KiB, free 1917.2 MiB)
18:46:55.765 INFO MemoryStore - Block broadcast_542_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.2 MiB)
18:46:55.765 INFO BlockManagerInfo - Added broadcast_542_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.5 MiB)
18:46:55.765 INFO SparkContext - Created broadcast 542 from broadcast at DAGScheduler.scala:1580
18:46:55.765 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 267 (ParallelCollectionRDD[1283] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:55.765 INFO TaskSchedulerImpl - Adding task set 267.0 with 4 tasks resource profile 0
18:46:55.766 INFO TaskSetManager - Starting task 0.0 in stage 267.0 (TID 326) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes)
18:46:55.766 INFO TaskSetManager - Starting task 1.0 in stage 267.0 (TID 327) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes)
18:46:55.767 INFO TaskSetManager - Starting task 2.0 in stage 267.0 (TID 328) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes)
18:46:55.767 INFO TaskSetManager - Starting task 3.0 in stage 267.0 (TID 329) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes)
18:46:55.767 INFO Executor - Running task 0.0 in stage 267.0 (TID 326)
18:46:55.767 INFO Executor - Running task 2.0 in stage 267.0 (TID 328)
18:46:55.767 INFO Executor - Running task 1.0 in stage 267.0 (TID 327)
18:46:55.767 INFO Executor - Running task 3.0 in stage 267.0 (TID 329)
18:46:55.769 INFO Executor - Finished task 1.0 in stage 267.0 (TID 327). 163368 bytes result sent to driver
18:46:55.769 INFO Executor - Finished task 3.0 in stage 267.0 (TID 329). 162694 bytes result sent to driver
18:46:55.770 INFO Executor - Finished task 2.0 in stage 267.0 (TID 328). 163399 bytes result sent to driver
18:46:55.770 INFO TaskSetManager - Finished task 3.0 in stage 267.0 (TID 329) in 3 ms on localhost (executor driver) (1/4)
18:46:55.771 INFO TaskSetManager - Finished task 1.0 in stage 267.0 (TID 327) in 5 ms on localhost (executor driver) (2/4)
18:46:55.771 INFO TaskSetManager - Finished task 2.0 in stage 267.0 (TID 328) in 5 ms on localhost (executor driver) (3/4)
18:46:55.772 INFO Executor - Finished task 0.0 in stage 267.0 (TID 326). 163161 bytes result sent to driver
18:46:55.772 INFO TaskSetManager - Finished task 0.0 in stage 267.0 (TID 326) in 7 ms on localhost (executor driver) (4/4)
18:46:55.772 INFO TaskSchedulerImpl - Removed TaskSet 267.0, whose tasks have all completed, from pool
18:46:55.772 INFO DAGScheduler - ResultStage 267 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.008 s
18:46:55.772 INFO DAGScheduler - Job 203 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:55.772 INFO TaskSchedulerImpl - Killing all running tasks in stage 267: Stage finished
18:46:55.772 INFO DAGScheduler - Job 203 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.008733 s
18:46:55.773 INFO FileInputFormat - Total input files to process : 1
18:46:55.809 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:55.809 INFO DAGScheduler - Got job 204 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:55.809 INFO DAGScheduler - Final stage: ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:55.809 INFO DAGScheduler - Parents of final stage: List()
18:46:55.809 INFO DAGScheduler - Missing parents: List()
18:46:55.809 INFO DAGScheduler - Submitting ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:55.826 INFO MemoryStore - Block broadcast_543 stored as values in memory (estimated size 426.2 KiB, free 1916.8 MiB)
18:46:55.827 INFO MemoryStore - Block broadcast_543_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1916.6 MiB)
18:46:55.827 INFO BlockManagerInfo - Added broadcast_543_piece0 in memory on localhost:45727 (size: 153.6 KiB, free: 1919.3 MiB)
18:46:55.827 INFO SparkContext - Created broadcast 543 from broadcast at DAGScheduler.scala:1580
18:46:55.828 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 268 (MapPartitionsRDD[1290] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:55.828 INFO TaskSchedulerImpl - Adding task set 268.0 with 1 tasks resource profile 0
18:46:55.828 INFO TaskSetManager - Starting task 0.0 in stage 268.0 (TID 330) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes)
18:46:55.828 INFO Executor - Running task 0.0 in stage 268.0 (TID 330)
18:46:55.857 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
18:46:55.867 INFO Executor - Finished task 0.0 in stage 268.0 (TID 330). 650141 bytes result sent to driver
18:46:55.868 INFO TaskSetManager - Finished task 0.0 in stage 268.0 (TID 330) in 40 ms on localhost (executor driver) (1/1)
18:46:55.868 INFO TaskSchedulerImpl - Removed TaskSet 268.0, whose tasks have all completed, from pool
18:46:55.869 INFO DAGScheduler - ResultStage 268 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.059 s
18:46:55.869 INFO DAGScheduler - Job 204 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:55.869 INFO TaskSchedulerImpl - Killing all running tasks in stage 268: Stage finished
18:46:55.869 INFO DAGScheduler - Job 204 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.059810 s
18:46:55.872 INFO MemoryStore - Block broadcast_544 stored as values in memory (estimated size 298.0 KiB, free 1916.3 MiB)
18:46:55.881 INFO MemoryStore - Block broadcast_544_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1916.3 MiB)
18:46:55.881 INFO BlockManagerInfo - Added broadcast_544_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.3 MiB)
18:46:55.881 INFO SparkContext - Created broadcast 544 from newAPIHadoopFile at PathSplitSource.java:96
18:46:55.904 INFO MemoryStore - Block broadcast_545 stored as values in memory (estimated size 298.0 KiB, free 1916.0 MiB)
18:46:55.910 INFO MemoryStore - Block broadcast_545_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1915.9 MiB)
18:46:55.910 INFO BlockManagerInfo - Added broadcast_545_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.3 MiB)
18:46:55.911 INFO SparkContext - Created broadcast 545 from newAPIHadoopFile at PathSplitSource.java:96
18:46:55.933 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:55.933 INFO DAGScheduler - Got job 205 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:55.933 INFO DAGScheduler - Final stage: ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:55.933 INFO DAGScheduler - Parents of final stage: List()
18:46:55.933 INFO DAGScheduler - Missing parents: List()
18:46:55.933 INFO DAGScheduler - Submitting ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:55.934 INFO MemoryStore - Block broadcast_546 stored as values in memory (estimated size 3.0 KiB, free 1915.9 MiB)
18:46:55.934 INFO MemoryStore - Block broadcast_546_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1915.9 MiB)
18:46:55.934 INFO BlockManagerInfo - Added broadcast_546_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.2 MiB)
18:46:55.934 INFO SparkContext - Created broadcast 546 from broadcast at DAGScheduler.scala:1580
18:46:55.934 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 269 (ParallelCollectionRDD[1296] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:55.934 INFO TaskSchedulerImpl - Adding task set 269.0 with 4 tasks resource profile 0
18:46:55.935 INFO TaskSetManager - Starting task 0.0 in stage 269.0 (TID 331) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes)
18:46:55.935 INFO TaskSetManager - Starting task 1.0 in stage 269.0 (TID 332) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes)
18:46:55.936 INFO TaskSetManager - Starting task 2.0 in stage 269.0 (TID 333) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes)
18:46:55.936 INFO TaskSetManager - Starting task 3.0 in stage 269.0 (TID 334) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes)
18:46:55.936 INFO Executor - Running task 1.0 in stage 269.0 (TID 332)
18:46:55.936 INFO Executor - Running task 0.0 in stage 269.0 (TID 331)
18:46:55.936 INFO Executor - Running task 2.0 in stage 269.0 (TID 333)
18:46:55.936 INFO Executor - Running task 3.0 in stage 269.0 (TID 334)
18:46:55.943 INFO Executor - Finished task 1.0 in stage 269.0 (TID 332). 163454 bytes result sent to driver
18:46:55.943 INFO Executor - Finished task 2.0 in stage 269.0 (TID 333). 163442 bytes result sent to driver
18:46:55.943 INFO Executor - Finished task 0.0 in stage 269.0 (TID 331). 163247 bytes result sent to driver
18:46:55.943 INFO BlockManagerInfo - Removed broadcast_538_piece0 on localhost:45727 in memory (size: 1777.0 B, free: 1919.3 MiB)
18:46:55.944 INFO Executor - Finished task 3.0 in stage 269.0 (TID 334). 162780 bytes result sent to driver
18:46:55.944 INFO TaskSetManager - Finished task 2.0 in stage 269.0 (TID 333) in 9 ms on localhost (executor driver) (1/4)
18:46:55.944 INFO TaskSetManager - Finished task 1.0 in stage 269.0 (TID 332) in 9 ms on localhost (executor driver) (2/4)
18:46:55.944 INFO TaskSetManager - Finished task 0.0 in stage 269.0 (TID 331) in 9 ms on localhost (executor driver) (3/4)
18:46:55.944 INFO TaskSetManager - Finished task 3.0 in stage 269.0 (TID 334) in 8 ms on localhost (executor driver) (4/4)
18:46:55.944 INFO TaskSchedulerImpl - Removed TaskSet 269.0, whose tasks have all completed, from pool
18:46:55.944 INFO DAGScheduler - ResultStage 269 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.011 s
18:46:55.944 INFO DAGScheduler - Job 205 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:55.944 INFO TaskSchedulerImpl - Killing all running tasks in stage 269: Stage finished
18:46:55.944 INFO BlockManagerInfo - Removed broadcast_537_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.3 MiB)
18:46:55.944 INFO DAGScheduler - Job 205 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.011524 s
18:46:55.945 INFO BlockManagerInfo - Removed broadcast_531_piece0 on localhost:45727 in memory (size: 233.0 B, free: 1919.3 MiB)
18:46:55.945 INFO BlockManagerInfo - Removed broadcast_536_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.3 MiB)
18:46:55.945 INFO FileInputFormat - Total input files to process : 1
18:46:55.947 INFO BlockManagerInfo - Removed broadcast_525_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.4 MiB)
18:46:55.947 INFO BlockManagerInfo - Removed broadcast_532_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.4 MiB)
18:46:55.948 INFO BlockManagerInfo - Removed broadcast_541_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.5 MiB)
18:46:55.948 INFO BlockManagerInfo - Removed broadcast_544_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.5 MiB)
18:46:55.948 INFO BlockManagerInfo - Removed broadcast_543_piece0 on localhost:45727 in memory (size: 153.6 KiB, free: 1919.7 MiB)
18:46:55.949 INFO BlockManagerInfo - Removed broadcast_540_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.7 MiB)
18:46:55.949 INFO BlockManagerInfo - Removed broadcast_535_piece0 on localhost:45727 in memory (size: 54.5 KiB, free: 1919.8 MiB)
18:46:55.950 INFO BlockManagerInfo - Removed broadcast_542_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.8 MiB)
18:46:55.950 INFO BlockManagerInfo - Removed broadcast_539_piece0 on localhost:45727 in memory (size: 153.6 KiB, free: 1919.9 MiB)
18:46:55.989 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:55.989 INFO DAGScheduler - Got job 206 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:55.989 INFO DAGScheduler - Final stage: ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:55.989 INFO DAGScheduler - Parents of final stage: List()
18:46:55.989 INFO DAGScheduler - Missing parents: List()
18:46:55.989 INFO DAGScheduler - Submitting ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.006 INFO MemoryStore - Block broadcast_547 stored as values in memory (estimated size 426.2 KiB, free 1919.2 MiB)
18:46:56.007 INFO MemoryStore - Block broadcast_547_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1919.1 MiB)
18:46:56.007 INFO BlockManagerInfo - Added broadcast_547_piece0 in memory on localhost:45727 (size: 153.7 KiB, free: 1919.8 MiB)
18:46:56.008 INFO SparkContext - Created broadcast 547 from broadcast at DAGScheduler.scala:1580
18:46:56.008 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 270 (MapPartitionsRDD[1303] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.008 INFO TaskSchedulerImpl - Adding task set 270.0 with 1 tasks resource profile 0
18:46:56.008 INFO TaskSetManager - Starting task 0.0 in stage 270.0 (TID 335) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7912 bytes)
18:46:56.008 INFO Executor - Running task 0.0 in stage 270.0 (TID 335)
18:46:56.038 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.bam:0+216896
18:46:56.050 INFO Executor - Finished task 0.0 in stage 270.0 (TID 335). 650141 bytes result sent to driver
18:46:56.051 INFO TaskSetManager - Finished task 0.0 in stage 270.0 (TID 335) in 43 ms on localhost (executor driver) (1/1)
18:46:56.051 INFO TaskSchedulerImpl - Removed TaskSet 270.0, whose tasks have all completed, from pool
18:46:56.051 INFO DAGScheduler - ResultStage 270 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.061 s
18:46:56.051 INFO DAGScheduler - Job 206 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.051 INFO TaskSchedulerImpl - Killing all running tasks in stage 270: Stage finished
18:46:56.052 INFO DAGScheduler - Job 206 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.062634 s
18:46:56.054 INFO MemoryStore - Block broadcast_548 stored as values in memory (estimated size 536.0 B, free 1919.1 MiB)
18:46:56.054 INFO MemoryStore - Block broadcast_548_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.1 MiB)
18:46:56.054 INFO BlockManagerInfo - Added broadcast_548_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.8 MiB)
18:46:56.054 INFO SparkContext - Created broadcast 548 from broadcast at CramSource.java:114
18:46:56.055 INFO MemoryStore - Block broadcast_549 stored as values in memory (estimated size 297.9 KiB, free 1918.8 MiB)
18:46:56.062 INFO MemoryStore - Block broadcast_549_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.7 MiB)
18:46:56.062 INFO BlockManagerInfo - Added broadcast_549_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.7 MiB)
18:46:56.062 INFO SparkContext - Created broadcast 549 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.081 INFO MemoryStore - Block broadcast_550 stored as values in memory (estimated size 536.0 B, free 1918.7 MiB)
18:46:56.081 INFO MemoryStore - Block broadcast_550_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.7 MiB)
18:46:56.081 INFO BlockManagerInfo - Added broadcast_550_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.7 MiB)
18:46:56.082 INFO SparkContext - Created broadcast 550 from broadcast at CramSource.java:114
18:46:56.082 INFO MemoryStore - Block broadcast_551 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
18:46:56.089 INFO MemoryStore - Block broadcast_551_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
18:46:56.089 INFO BlockManagerInfo - Added broadcast_551_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.7 MiB)
18:46:56.089 INFO SparkContext - Created broadcast 551 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.106 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:56.107 INFO DAGScheduler - Got job 207 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:56.107 INFO DAGScheduler - Final stage: ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:56.107 INFO DAGScheduler - Parents of final stage: List()
18:46:56.107 INFO DAGScheduler - Missing parents: List()
18:46:56.107 INFO DAGScheduler - Submitting ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:56.107 INFO MemoryStore - Block broadcast_552 stored as values in memory (estimated size 3.0 KiB, free 1918.4 MiB)
18:46:56.108 INFO MemoryStore - Block broadcast_552_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.4 MiB)
18:46:56.108 INFO BlockManagerInfo - Added broadcast_552_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.7 MiB)
18:46:56.108 INFO SparkContext - Created broadcast 552 from broadcast at DAGScheduler.scala:1580
18:46:56.108 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 271 (ParallelCollectionRDD[1308] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:56.108 INFO TaskSchedulerImpl - Adding task set 271.0 with 4 tasks resource profile 0
18:46:56.109 INFO TaskSetManager - Starting task 0.0 in stage 271.0 (TID 336) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes)
18:46:56.109 INFO TaskSetManager - Starting task 1.0 in stage 271.0 (TID 337) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes)
18:46:56.109 INFO TaskSetManager - Starting task 2.0 in stage 271.0 (TID 338) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes)
18:46:56.110 INFO TaskSetManager - Starting task 3.0 in stage 271.0 (TID 339) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes)
18:46:56.110 INFO Executor - Running task 0.0 in stage 271.0 (TID 336)
18:46:56.110 INFO Executor - Running task 1.0 in stage 271.0 (TID 337)
18:46:56.110 INFO Executor - Running task 2.0 in stage 271.0 (TID 338)
18:46:56.110 INFO Executor - Running task 3.0 in stage 271.0 (TID 339)
18:46:56.111 INFO Executor - Finished task 1.0 in stage 271.0 (TID 337). 39087 bytes result sent to driver
18:46:56.111 INFO Executor - Finished task 3.0 in stage 271.0 (TID 339). 39300 bytes result sent to driver
18:46:56.111 INFO Executor - Finished task 0.0 in stage 271.0 (TID 336). 38950 bytes result sent to driver
18:46:56.112 INFO Executor - Finished task 2.0 in stage 271.0 (TID 338). 39070 bytes result sent to driver
18:46:56.112 INFO TaskSetManager - Finished task 3.0 in stage 271.0 (TID 339) in 3 ms on localhost (executor driver) (1/4)
18:46:56.112 INFO TaskSetManager - Finished task 0.0 in stage 271.0 (TID 336) in 4 ms on localhost (executor driver) (2/4)
18:46:56.112 INFO TaskSetManager - Finished task 1.0 in stage 271.0 (TID 337) in 3 ms on localhost (executor driver) (3/4)
18:46:56.112 INFO TaskSetManager - Finished task 2.0 in stage 271.0 (TID 338) in 3 ms on localhost (executor driver) (4/4)
18:46:56.112 INFO TaskSchedulerImpl - Removed TaskSet 271.0, whose tasks have all completed, from pool
18:46:56.112 INFO DAGScheduler - ResultStage 271 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
18:46:56.112 INFO DAGScheduler - Job 207 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.112 INFO TaskSchedulerImpl - Killing all running tasks in stage 271: Stage finished
18:46:56.112 INFO DAGScheduler - Job 207 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005964 s
18:46:56.113 INFO FileInputFormat - Total input files to process : 1
18:46:56.139 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:56.139 INFO DAGScheduler - Got job 208 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:56.139 INFO DAGScheduler - Final stage: ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:56.139 INFO DAGScheduler - Parents of final stage: List()
18:46:56.139 INFO DAGScheduler - Missing parents: List()
18:46:56.139 INFO DAGScheduler - Submitting ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.156 INFO MemoryStore - Block broadcast_553 stored as values in memory (estimated size 286.8 KiB, free 1918.1 MiB)
18:46:56.157 INFO MemoryStore - Block broadcast_553_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.0 MiB)
18:46:56.157 INFO BlockManagerInfo - Added broadcast_553_piece0 in memory on localhost:45727 (size: 103.6 KiB, free: 1919.6 MiB)
18:46:56.157 INFO SparkContext - Created broadcast 553 from broadcast at DAGScheduler.scala:1580
18:46:56.157 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 272 (MapPartitionsRDD[1314] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.157 INFO TaskSchedulerImpl - Adding task set 272.0 with 1 tasks resource profile 0
18:46:56.158 INFO TaskSetManager - Starting task 0.0 in stage 272.0 (TID 340) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
18:46:56.158 INFO Executor - Running task 0.0 in stage 272.0 (TID 340)
18:46:56.180 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
18:46:56.187 INFO Executor - Finished task 0.0 in stage 272.0 (TID 340). 154058 bytes result sent to driver
18:46:56.188 INFO TaskSetManager - Finished task 0.0 in stage 272.0 (TID 340) in 30 ms on localhost (executor driver) (1/1)
18:46:56.188 INFO TaskSchedulerImpl - Removed TaskSet 272.0, whose tasks have all completed, from pool
18:46:56.188 INFO DAGScheduler - ResultStage 272 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.049 s
18:46:56.188 INFO DAGScheduler - Job 208 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.188 INFO TaskSchedulerImpl - Killing all running tasks in stage 272: Stage finished
18:46:56.188 INFO DAGScheduler - Job 208 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.049765 s
18:46:56.192 INFO MemoryStore - Block broadcast_554 stored as values in memory (estimated size 608.0 B, free 1918.0 MiB)
18:46:56.192 INFO MemoryStore - Block broadcast_554_piece0 stored as bytes in memory (estimated size 206.0 B, free 1918.0 MiB)
18:46:56.192 INFO BlockManagerInfo - Added broadcast_554_piece0 in memory on localhost:45727 (size: 206.0 B, free: 1919.6 MiB)
18:46:56.193 INFO SparkContext - Created broadcast 554 from broadcast at CramSource.java:114
18:46:56.193 INFO MemoryStore - Block broadcast_555 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
18:46:56.200 INFO MemoryStore - Block broadcast_555_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
18:46:56.200 INFO BlockManagerInfo - Added broadcast_555_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.200 INFO SparkContext - Created broadcast 555 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.218 INFO MemoryStore - Block broadcast_556 stored as values in memory (estimated size 608.0 B, free 1917.7 MiB)
18:46:56.218 INFO MemoryStore - Block broadcast_556_piece0 stored as bytes in memory (estimated size 206.0 B, free 1917.7 MiB)
18:46:56.218 INFO BlockManagerInfo - Added broadcast_556_piece0 in memory on localhost:45727 (size: 206.0 B, free: 1919.5 MiB)
18:46:56.218 INFO SparkContext - Created broadcast 556 from broadcast at CramSource.java:114
18:46:56.219 INFO MemoryStore - Block broadcast_557 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
18:46:56.225 INFO MemoryStore - Block broadcast_557_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
18:46:56.225 INFO BlockManagerInfo - Added broadcast_557_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.225 INFO SparkContext - Created broadcast 557 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.242 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:56.242 INFO DAGScheduler - Got job 209 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:56.242 INFO DAGScheduler - Final stage: ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:56.242 INFO DAGScheduler - Parents of final stage: List()
18:46:56.242 INFO DAGScheduler - Missing parents: List()
18:46:56.242 INFO DAGScheduler - Submitting ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:56.243 INFO MemoryStore - Block broadcast_558 stored as values in memory (estimated size 3.0 KiB, free 1917.3 MiB)
18:46:56.243 INFO MemoryStore - Block broadcast_558_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.3 MiB)
18:46:56.243 INFO BlockManagerInfo - Added broadcast_558_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.5 MiB)
18:46:56.243 INFO SparkContext - Created broadcast 558 from broadcast at DAGScheduler.scala:1580
18:46:56.243 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 273 (ParallelCollectionRDD[1319] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:56.244 INFO TaskSchedulerImpl - Adding task set 273.0 with 4 tasks resource profile 0
18:46:56.244 INFO TaskSetManager - Starting task 0.0 in stage 273.0 (TID 341) (localhost, executor driver, partition 0, PROCESS_LOCAL, 8303 bytes)
18:46:56.244 INFO TaskSetManager - Starting task 1.0 in stage 273.0 (TID 342) (localhost, executor driver, partition 1, PROCESS_LOCAL, 8292 bytes)
18:46:56.244 INFO TaskSetManager - Starting task 2.0 in stage 273.0 (TID 343) (localhost, executor driver, partition 2, PROCESS_LOCAL, 8292 bytes)
18:46:56.244 INFO TaskSetManager - Starting task 3.0 in stage 273.0 (TID 344) (localhost, executor driver, partition 3, PROCESS_LOCAL, 8184 bytes)
18:46:56.245 INFO Executor - Running task 3.0 in stage 273.0 (TID 344)
18:46:56.245 INFO Executor - Running task 0.0 in stage 273.0 (TID 341)
18:46:56.245 INFO Executor - Running task 1.0 in stage 273.0 (TID 342)
18:46:56.245 INFO Executor - Running task 2.0 in stage 273.0 (TID 343)
18:46:56.246 INFO Executor - Finished task 0.0 in stage 273.0 (TID 341). 1535 bytes result sent to driver
18:46:56.246 INFO Executor - Finished task 3.0 in stage 273.0 (TID 344). 1416 bytes result sent to driver
18:46:56.251 INFO Executor - Finished task 1.0 in stage 273.0 (TID 342). 1610 bytes result sent to driver
18:46:56.251 INFO Executor - Finished task 2.0 in stage 273.0 (TID 343). 1610 bytes result sent to driver
18:46:56.251 INFO TaskSetManager - Finished task 3.0 in stage 273.0 (TID 344) in 7 ms on localhost (executor driver) (1/4)
18:46:56.251 INFO TaskSetManager - Finished task 0.0 in stage 273.0 (TID 341) in 7 ms on localhost (executor driver) (2/4)
18:46:56.251 INFO TaskSetManager - Finished task 1.0 in stage 273.0 (TID 342) in 7 ms on localhost (executor driver) (3/4)
18:46:56.251 INFO TaskSetManager - Finished task 2.0 in stage 273.0 (TID 343) in 7 ms on localhost (executor driver) (4/4)
18:46:56.251 INFO TaskSchedulerImpl - Removed TaskSet 273.0, whose tasks have all completed, from pool
18:46:56.251 INFO BlockManagerInfo - Removed broadcast_555_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.252 INFO DAGScheduler - ResultStage 273 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.009 s
18:46:56.252 INFO DAGScheduler - Job 209 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.252 INFO TaskSchedulerImpl - Killing all running tasks in stage 273: Stage finished
18:46:56.252 INFO DAGScheduler - Job 209 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.009652 s
18:46:56.252 INFO BlockManagerInfo - Removed broadcast_552_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.5 MiB)
18:46:56.252 INFO BlockManagerInfo - Removed broadcast_554_piece0 on localhost:45727 in memory (size: 206.0 B, free: 1919.5 MiB)
18:46:56.253 INFO FileInputFormat - Total input files to process : 1
18:46:56.253 INFO BlockManagerInfo - Removed broadcast_553_piece0 on localhost:45727 in memory (size: 103.6 KiB, free: 1919.6 MiB)
18:46:56.253 INFO BlockManagerInfo - Removed broadcast_546_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.7 MiB)
18:46:56.254 INFO BlockManagerInfo - Removed broadcast_547_piece0 on localhost:45727 in memory (size: 153.7 KiB, free: 1919.8 MiB)
18:46:56.254 INFO BlockManagerInfo - Removed broadcast_548_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.8 MiB)
18:46:56.254 INFO BlockManagerInfo - Removed broadcast_549_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.9 MiB)
18:46:56.255 INFO BlockManagerInfo - Removed broadcast_551_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.9 MiB)
18:46:56.255 INFO BlockManagerInfo - Removed broadcast_545_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.9 MiB)
18:46:56.255 INFO BlockManagerInfo - Removed broadcast_550_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.9 MiB)
18:46:56.279 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:56.279 INFO DAGScheduler - Got job 210 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:56.279 INFO DAGScheduler - Final stage: ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:56.279 INFO DAGScheduler - Parents of final stage: List()
18:46:56.279 INFO DAGScheduler - Missing parents: List()
18:46:56.279 INFO DAGScheduler - Submitting ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.290 INFO MemoryStore - Block broadcast_559 stored as values in memory (estimated size 286.8 KiB, free 1919.4 MiB)
18:46:56.291 INFO MemoryStore - Block broadcast_559_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1919.3 MiB)
18:46:56.291 INFO BlockManagerInfo - Added broadcast_559_piece0 in memory on localhost:45727 (size: 103.6 KiB, free: 1919.8 MiB)
18:46:56.292 INFO SparkContext - Created broadcast 559 from broadcast at DAGScheduler.scala:1580
18:46:56.292 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 274 (MapPartitionsRDD[1325] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.292 INFO TaskSchedulerImpl - Adding task set 274.0 with 1 tasks resource profile 0
18:46:56.292 INFO TaskSetManager - Starting task 0.0 in stage 274.0 (TID 345) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7851 bytes)
18:46:56.292 INFO Executor - Running task 0.0 in stage 274.0 (TID 345)
18:46:56.313 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/valid.cram:0+13330
18:46:56.317 INFO Executor - Finished task 0.0 in stage 274.0 (TID 345). 3736 bytes result sent to driver
18:46:56.317 INFO TaskSetManager - Finished task 0.0 in stage 274.0 (TID 345) in 25 ms on localhost (executor driver) (1/1)
18:46:56.317 INFO TaskSchedulerImpl - Removed TaskSet 274.0, whose tasks have all completed, from pool
18:46:56.317 INFO DAGScheduler - ResultStage 274 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.038 s
18:46:56.317 INFO DAGScheduler - Job 210 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.317 INFO TaskSchedulerImpl - Killing all running tasks in stage 274: Stage finished
18:46:56.317 INFO DAGScheduler - Job 210 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.038637 s
18:46:56.320 INFO MemoryStore - Block broadcast_560 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
18:46:56.327 INFO MemoryStore - Block broadcast_560_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.9 MiB)
18:46:56.327 INFO BlockManagerInfo - Added broadcast_560_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.8 MiB)
18:46:56.328 INFO SparkContext - Created broadcast 560 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
18:46:56.352 INFO MemoryStore - Block broadcast_561 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
18:46:56.358 INFO MemoryStore - Block broadcast_561_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
18:46:56.359 INFO BlockManagerInfo - Added broadcast_561_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.7 MiB)
18:46:56.359 INFO SparkContext - Created broadcast 561 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.381 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:56.382 INFO DAGScheduler - Got job 211 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:56.382 INFO DAGScheduler - Final stage: ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:56.382 INFO DAGScheduler - Parents of final stage: List()
18:46:56.382 INFO DAGScheduler - Missing parents: List()
18:46:56.382 INFO DAGScheduler - Submitting ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:56.382 INFO MemoryStore - Block broadcast_562 stored as values in memory (estimated size 3.0 KiB, free 1918.6 MiB)
18:46:56.383 INFO MemoryStore - Block broadcast_562_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.6 MiB)
18:46:56.383 INFO BlockManagerInfo - Added broadcast_562_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.7 MiB)
18:46:56.383 INFO SparkContext - Created broadcast 562 from broadcast at DAGScheduler.scala:1580
18:46:56.383 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 275 (ParallelCollectionRDD[1331] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:56.383 INFO TaskSchedulerImpl - Adding task set 275.0 with 4 tasks resource profile 0
18:46:56.383 INFO TaskSetManager - Starting task 0.0 in stage 275.0 (TID 346) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes)
18:46:56.384 INFO TaskSetManager - Starting task 1.0 in stage 275.0 (TID 347) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes)
18:46:56.384 INFO TaskSetManager - Starting task 2.0 in stage 275.0 (TID 348) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes)
18:46:56.384 INFO TaskSetManager - Starting task 3.0 in stage 275.0 (TID 349) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes)
18:46:56.384 INFO Executor - Running task 0.0 in stage 275.0 (TID 346)
18:46:56.384 INFO Executor - Running task 1.0 in stage 275.0 (TID 347)
18:46:56.384 INFO Executor - Running task 2.0 in stage 275.0 (TID 348)
18:46:56.384 INFO Executor - Running task 3.0 in stage 275.0 (TID 349)
18:46:56.386 INFO Executor - Finished task 0.0 in stage 275.0 (TID 346). 40312 bytes result sent to driver
18:46:56.386 INFO Executor - Finished task 1.0 in stage 275.0 (TID 347). 40426 bytes result sent to driver
18:46:56.386 INFO Executor - Finished task 3.0 in stage 275.0 (TID 349). 40762 bytes result sent to driver
18:46:56.386 INFO Executor - Finished task 2.0 in stage 275.0 (TID 348). 40380 bytes result sent to driver
18:46:56.386 INFO TaskSetManager - Finished task 1.0 in stage 275.0 (TID 347) in 3 ms on localhost (executor driver) (1/4)
18:46:56.386 INFO TaskSetManager - Finished task 0.0 in stage 275.0 (TID 346) in 3 ms on localhost (executor driver) (2/4)
18:46:56.386 INFO TaskSetManager - Finished task 3.0 in stage 275.0 (TID 349) in 2 ms on localhost (executor driver) (3/4)
18:46:56.387 INFO TaskSetManager - Finished task 2.0 in stage 275.0 (TID 348) in 3 ms on localhost (executor driver) (4/4)
18:46:56.387 INFO TaskSchedulerImpl - Removed TaskSet 275.0, whose tasks have all completed, from pool
18:46:56.387 INFO DAGScheduler - ResultStage 275 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.005 s
18:46:56.387 INFO DAGScheduler - Job 211 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.387 INFO TaskSchedulerImpl - Killing all running tasks in stage 275: Stage finished
18:46:56.387 INFO DAGScheduler - Job 211 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005360 s
18:46:56.387 INFO FileInputFormat - Total input files to process : 1
18:46:56.423 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:56.423 INFO DAGScheduler - Got job 212 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:56.423 INFO DAGScheduler - Final stage: ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:56.423 INFO DAGScheduler - Parents of final stage: List()
18:46:56.423 INFO DAGScheduler - Missing parents: List()
18:46:56.423 INFO DAGScheduler - Submitting ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.440 INFO MemoryStore - Block broadcast_563 stored as values in memory (estimated size 426.2 KiB, free 1918.2 MiB)
18:46:56.441 INFO MemoryStore - Block broadcast_563_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.0 MiB)
18:46:56.441 INFO BlockManagerInfo - Added broadcast_563_piece0 in memory on localhost:45727 (size: 153.7 KiB, free: 1919.6 MiB)
18:46:56.441 INFO SparkContext - Created broadcast 563 from broadcast at DAGScheduler.scala:1580
18:46:56.441 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 276 (MapPartitionsRDD[1338] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.442 INFO TaskSchedulerImpl - Adding task set 276.0 with 1 tasks resource profile 0
18:46:56.442 INFO TaskSetManager - Starting task 0.0 in stage 276.0 (TID 350) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
18:46:56.442 INFO Executor - Running task 0.0 in stage 276.0 (TID 350)
18:46:56.471 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 130, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 275, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 419, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 440, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
18:46:56.476 INFO Executor - Finished task 0.0 in stage 276.0 (TID 350). 159572 bytes result sent to driver
18:46:56.476 INFO TaskSetManager - Finished task 0.0 in stage 276.0 (TID 350) in 34 ms on localhost (executor driver) (1/1)
18:46:56.476 INFO TaskSchedulerImpl - Removed TaskSet 276.0, whose tasks have all completed, from pool
18:46:56.476 INFO DAGScheduler - ResultStage 276 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.053 s
18:46:56.476 INFO DAGScheduler - Job 212 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.476 INFO TaskSchedulerImpl - Killing all running tasks in stage 276: Stage finished
18:46:56.477 INFO DAGScheduler - Job 212 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.053843 s
18:46:56.479 INFO MemoryStore - Block broadcast_564 stored as values in memory (estimated size 536.0 B, free 1918.0 MiB)
18:46:56.479 INFO MemoryStore - Block broadcast_564_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.0 MiB)
18:46:56.479 INFO BlockManagerInfo - Added broadcast_564_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.6 MiB)
18:46:56.479 INFO SparkContext - Created broadcast 564 from broadcast at CramSource.java:114
18:46:56.480 INFO MemoryStore - Block broadcast_565 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
18:46:56.492 INFO MemoryStore - Block broadcast_565_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.7 MiB)
18:46:56.492 INFO BlockManagerInfo - Added broadcast_565_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.492 INFO SparkContext - Created broadcast 565 from newAPIHadoopFile at PathSplitSource.java:96
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
18:46:56.523 INFO MemoryStore - Block broadcast_566 stored as values in memory (estimated size 536.0 B, free 1917.7 MiB)
18:46:56.523 INFO MemoryStore - Block broadcast_566_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.7 MiB)
18:46:56.523 INFO BlockManagerInfo - Added broadcast_566_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.5 MiB)
18:46:56.523 INFO SparkContext - Created broadcast 566 from broadcast at CramSource.java:114
18:46:56.524 INFO MemoryStore - Block broadcast_567 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
18:46:56.530 INFO MemoryStore - Block broadcast_567_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
18:46:56.530 INFO BlockManagerInfo - Added broadcast_567_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.530 INFO SparkContext - Created broadcast 567 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.547 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:56.547 INFO DAGScheduler - Got job 213 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:56.547 INFO DAGScheduler - Final stage: ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:56.547 INFO DAGScheduler - Parents of final stage: List()
18:46:56.547 INFO DAGScheduler - Missing parents: List()
18:46:56.547 INFO DAGScheduler - Submitting ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:56.547 INFO MemoryStore - Block broadcast_568 stored as values in memory (estimated size 3.0 KiB, free 1917.3 MiB)
18:46:56.548 INFO MemoryStore - Block broadcast_568_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1917.3 MiB)
18:46:56.548 INFO BlockManagerInfo - Added broadcast_568_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.5 MiB)
18:46:56.548 INFO SparkContext - Created broadcast 568 from broadcast at DAGScheduler.scala:1580
18:46:56.548 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 277 (ParallelCollectionRDD[1343] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:56.548 INFO TaskSchedulerImpl - Adding task set 277.0 with 4 tasks resource profile 0
18:46:56.549 INFO TaskSetManager - Starting task 0.0 in stage 277.0 (TID 351) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes)
18:46:56.550 INFO TaskSetManager - Starting task 1.0 in stage 277.0 (TID 352) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes)
18:46:56.550 INFO TaskSetManager - Starting task 2.0 in stage 277.0 (TID 353) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes)
18:46:56.550 INFO TaskSetManager - Starting task 3.0 in stage 277.0 (TID 354) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes)
18:46:56.551 INFO Executor - Running task 3.0 in stage 277.0 (TID 354)
18:46:56.551 INFO Executor - Running task 0.0 in stage 277.0 (TID 351)
18:46:56.551 INFO Executor - Running task 2.0 in stage 277.0 (TID 353)
18:46:56.551 INFO Executor - Running task 1.0 in stage 277.0 (TID 352)
18:46:56.552 INFO Executor - Finished task 2.0 in stage 277.0 (TID 353). 39027 bytes result sent to driver
18:46:56.552 INFO Executor - Finished task 0.0 in stage 277.0 (TID 351). 38950 bytes result sent to driver
18:46:56.552 INFO Executor - Finished task 3.0 in stage 277.0 (TID 354). 39300 bytes result sent to driver
18:46:56.553 INFO Executor - Finished task 1.0 in stage 277.0 (TID 352). 39130 bytes result sent to driver
18:46:56.553 INFO TaskSetManager - Finished task 0.0 in stage 277.0 (TID 351) in 4 ms on localhost (executor driver) (1/4)
18:46:56.554 INFO TaskSetManager - Finished task 1.0 in stage 277.0 (TID 352) in 5 ms on localhost (executor driver) (2/4)
18:46:56.554 INFO TaskSetManager - Finished task 2.0 in stage 277.0 (TID 353) in 4 ms on localhost (executor driver) (3/4)
18:46:56.554 INFO TaskSetManager - Finished task 3.0 in stage 277.0 (TID 354) in 4 ms on localhost (executor driver) (4/4)
18:46:56.554 INFO TaskSchedulerImpl - Removed TaskSet 277.0, whose tasks have all completed, from pool
18:46:56.554 INFO DAGScheduler - ResultStage 277 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.007 s
18:46:56.554 INFO DAGScheduler - Job 213 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.554 INFO TaskSchedulerImpl - Killing all running tasks in stage 277: Stage finished
18:46:56.554 INFO DAGScheduler - Job 213 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.007520 s
18:46:56.555 INFO FileInputFormat - Total input files to process : 1
18:46:56.580 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:56.580 INFO DAGScheduler - Got job 214 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:56.580 INFO DAGScheduler - Final stage: ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:56.580 INFO DAGScheduler - Parents of final stage: List()
18:46:56.580 INFO DAGScheduler - Missing parents: List()
18:46:56.580 INFO DAGScheduler - Submitting ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.600 INFO MemoryStore - Block broadcast_569 stored as values in memory (estimated size 286.8 KiB, free 1917.1 MiB)
18:46:56.601 INFO MemoryStore - Block broadcast_569_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1917.0 MiB)
18:46:56.601 INFO BlockManagerInfo - Added broadcast_569_piece0 in memory on localhost:45727 (size: 103.6 KiB, free: 1919.4 MiB)
18:46:56.601 INFO SparkContext - Created broadcast 569 from broadcast at DAGScheduler.scala:1580
18:46:56.601 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 278 (MapPartitionsRDD[1349] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.601 INFO TaskSchedulerImpl - Adding task set 278.0 with 1 tasks resource profile 0
18:46:56.602 INFO TaskSetManager - Starting task 0.0 in stage 278.0 (TID 355) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
18:46:56.602 INFO Executor - Running task 0.0 in stage 278.0 (TID 355)
18:46:56.623 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 129, Read name 809R9ABXX101220:5:6:17918:145992, The unaligned mate start position is 69400, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 274, Read name 809R9ABXX101220:5:8:7119:101555, The unaligned mate start position is 69471, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 418, Read name 809R9ABXX101220:5:48:20204:50350, The unaligned mate start position is 69569, should be 0
Ignoring SAM validation error: ERROR::INVALID_ALIGNMENT_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mate Alignment start should be 0 because reference name = *.
Ignoring SAM validation error: ERROR::INVALID_FLAG_MATE_UNMAPPED:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, Mapped mate should have mate reference name
Ignoring SAM validation error: ERROR::INVALID_UNALIGNED_MATE_START:Record 439, Read name 809R9ABXX101220:5:44:13776:49360, The unaligned mate start position is 69592, should be 0
18:46:56.631 INFO Executor - Finished task 0.0 in stage 278.0 (TID 355). 154058 bytes result sent to driver
18:46:56.632 INFO TaskSetManager - Finished task 0.0 in stage 278.0 (TID 355) in 29 ms on localhost (executor driver) (1/1)
18:46:56.632 INFO TaskSchedulerImpl - Removed TaskSet 278.0, whose tasks have all completed, from pool
18:46:56.632 INFO DAGScheduler - ResultStage 278 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.052 s
18:46:56.632 INFO DAGScheduler - Job 214 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.632 INFO TaskSchedulerImpl - Killing all running tasks in stage 278: Stage finished
18:46:56.632 INFO DAGScheduler - Job 214 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.052011 s
18:46:56.634 INFO MemoryStore - Block broadcast_570 stored as values in memory (estimated size 297.9 KiB, free 1916.7 MiB)
18:46:56.642 INFO BlockManagerInfo - Removed broadcast_562_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.4 MiB)
18:46:56.643 INFO BlockManagerInfo - Removed broadcast_560_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.4 MiB)
18:46:56.643 INFO BlockManagerInfo - Removed broadcast_564_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.4 MiB)
18:46:56.644 INFO BlockManagerInfo - Removed broadcast_558_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.4 MiB)
18:46:56.644 INFO BlockManagerInfo - Removed broadcast_569_piece0 on localhost:45727 in memory (size: 103.6 KiB, free: 1919.6 MiB)
18:46:56.644 INFO BlockManagerInfo - Removed broadcast_563_piece0 on localhost:45727 in memory (size: 153.7 KiB, free: 1919.7 MiB)
18:46:56.645 INFO BlockManagerInfo - Removed broadcast_559_piece0 on localhost:45727 in memory (size: 103.6 KiB, free: 1919.8 MiB)
18:46:56.645 INFO BlockManagerInfo - Removed broadcast_568_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.8 MiB)
18:46:56.645 INFO BlockManagerInfo - Removed broadcast_567_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.9 MiB)
18:46:56.646 INFO BlockManagerInfo - Removed broadcast_557_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.9 MiB)
18:46:56.646 INFO BlockManagerInfo - Removed broadcast_565_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1920.0 MiB)
18:46:56.646 INFO BlockManagerInfo - Removed broadcast_556_piece0 on localhost:45727 in memory (size: 206.0 B, free: 1920.0 MiB)
18:46:56.647 INFO BlockManagerInfo - Removed broadcast_566_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1920.0 MiB)
18:46:56.648 INFO BlockManagerInfo - Removed broadcast_561_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1920.0 MiB)
18:46:56.649 INFO MemoryStore - Block broadcast_570_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
18:46:56.649 INFO BlockManagerInfo - Added broadcast_570_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1920.0 MiB)
18:46:56.649 INFO SparkContext - Created broadcast 570 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.671 INFO MemoryStore - Block broadcast_571 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
18:46:56.677 INFO MemoryStore - Block broadcast_571_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
18:46:56.677 INFO BlockManagerInfo - Added broadcast_571_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.9 MiB)
18:46:56.677 INFO SparkContext - Created broadcast 571 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.700 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:56.700 INFO DAGScheduler - Got job 215 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:56.700 INFO DAGScheduler - Final stage: ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:56.700 INFO DAGScheduler - Parents of final stage: List()
18:46:56.700 INFO DAGScheduler - Missing parents: List()
18:46:56.700 INFO DAGScheduler - Submitting ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:56.701 INFO MemoryStore - Block broadcast_572 stored as values in memory (estimated size 3.0 KiB, free 1919.3 MiB)
18:46:56.701 INFO MemoryStore - Block broadcast_572_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.3 MiB)
18:46:56.701 INFO BlockManagerInfo - Added broadcast_572_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.9 MiB)
18:46:56.701 INFO SparkContext - Created broadcast 572 from broadcast at DAGScheduler.scala:1580
18:46:56.701 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 279 (ParallelCollectionRDD[1355] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:56.701 INFO TaskSchedulerImpl - Adding task set 279.0 with 4 tasks resource profile 0
18:46:56.702 INFO TaskSetManager - Starting task 0.0 in stage 279.0 (TID 356) (localhost, executor driver, partition 0, PROCESS_LOCAL, 47080 bytes)
18:46:56.702 INFO TaskSetManager - Starting task 1.0 in stage 279.0 (TID 357) (localhost, executor driver, partition 1, PROCESS_LOCAL, 47194 bytes)
18:46:56.702 INFO TaskSetManager - Starting task 2.0 in stage 279.0 (TID 358) (localhost, executor driver, partition 2, PROCESS_LOCAL, 47105 bytes)
18:46:56.703 INFO TaskSetManager - Starting task 3.0 in stage 279.0 (TID 359) (localhost, executor driver, partition 3, PROCESS_LOCAL, 47530 bytes)
18:46:56.703 INFO Executor - Running task 0.0 in stage 279.0 (TID 356)
18:46:56.703 INFO Executor - Running task 3.0 in stage 279.0 (TID 359)
18:46:56.703 INFO Executor - Running task 1.0 in stage 279.0 (TID 357)
18:46:56.703 INFO Executor - Running task 2.0 in stage 279.0 (TID 358)
18:46:56.704 INFO Executor - Finished task 0.0 in stage 279.0 (TID 356). 40312 bytes result sent to driver
18:46:56.704 INFO Executor - Finished task 1.0 in stage 279.0 (TID 357). 40426 bytes result sent to driver
18:46:56.704 INFO Executor - Finished task 2.0 in stage 279.0 (TID 358). 40380 bytes result sent to driver
18:46:56.705 INFO TaskSetManager - Finished task 0.0 in stage 279.0 (TID 356) in 3 ms on localhost (executor driver) (1/4)
18:46:56.705 INFO TaskSetManager - Finished task 1.0 in stage 279.0 (TID 357) in 3 ms on localhost (executor driver) (2/4)
18:46:56.705 INFO Executor - Finished task 3.0 in stage 279.0 (TID 359). 40805 bytes result sent to driver
18:46:56.705 INFO TaskSetManager - Finished task 2.0 in stage 279.0 (TID 358) in 3 ms on localhost (executor driver) (3/4)
18:46:56.706 INFO TaskSetManager - Finished task 3.0 in stage 279.0 (TID 359) in 4 ms on localhost (executor driver) (4/4)
18:46:56.706 INFO TaskSchedulerImpl - Removed TaskSet 279.0, whose tasks have all completed, from pool
18:46:56.706 INFO DAGScheduler - ResultStage 279 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
18:46:56.706 INFO DAGScheduler - Job 215 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.706 INFO TaskSchedulerImpl - Killing all running tasks in stage 279: Stage finished
18:46:56.706 INFO DAGScheduler - Job 215 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.005920 s
18:46:56.706 INFO FileInputFormat - Total input files to process : 1
18:46:56.742 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:56.743 INFO DAGScheduler - Got job 216 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:56.743 INFO DAGScheduler - Final stage: ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:56.743 INFO DAGScheduler - Parents of final stage: List()
18:46:56.743 INFO DAGScheduler - Missing parents: List()
18:46:56.743 INFO DAGScheduler - Submitting ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.759 INFO MemoryStore - Block broadcast_573 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
18:46:56.761 INFO MemoryStore - Block broadcast_573_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.7 MiB)
18:46:56.761 INFO BlockManagerInfo - Added broadcast_573_piece0 in memory on localhost:45727 (size: 153.7 KiB, free: 1919.8 MiB)
18:46:56.761 INFO SparkContext - Created broadcast 573 from broadcast at DAGScheduler.scala:1580
18:46:56.761 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 280 (MapPartitionsRDD[1362] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.761 INFO TaskSchedulerImpl - Adding task set 280.0 with 1 tasks resource profile 0
18:46:56.761 INFO TaskSetManager - Starting task 0.0 in stage 280.0 (TID 360) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
18:46:56.762 INFO Executor - Running task 0.0 in stage 280.0 (TID 360)
18:46:56.791 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
18:46:56.795 INFO Executor - Finished task 0.0 in stage 280.0 (TID 360). 159572 bytes result sent to driver
18:46:56.796 INFO TaskSetManager - Finished task 0.0 in stage 280.0 (TID 360) in 35 ms on localhost (executor driver) (1/1)
18:46:56.796 INFO TaskSchedulerImpl - Removed TaskSet 280.0, whose tasks have all completed, from pool
18:46:56.796 INFO DAGScheduler - ResultStage 280 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.053 s
18:46:56.796 INFO DAGScheduler - Job 216 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.796 INFO TaskSchedulerImpl - Killing all running tasks in stage 280: Stage finished
18:46:56.796 INFO DAGScheduler - Job 216 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.053771 s
18:46:56.799 INFO MemoryStore - Block broadcast_574 stored as values in memory (estimated size 536.0 B, free 1918.7 MiB)
18:46:56.799 INFO MemoryStore - Block broadcast_574_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.7 MiB)
18:46:56.799 INFO BlockManagerInfo - Added broadcast_574_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.7 MiB)
18:46:56.799 INFO SparkContext - Created broadcast 574 from broadcast at CramSource.java:114
18:46:56.800 INFO MemoryStore - Block broadcast_575 stored as values in memory (estimated size 297.9 KiB, free 1918.5 MiB)
18:46:56.807 INFO MemoryStore - Block broadcast_575_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.4 MiB)
18:46:56.807 INFO BlockManagerInfo - Added broadcast_575_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.7 MiB)
18:46:56.807 INFO SparkContext - Created broadcast 575 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.827 INFO MemoryStore - Block broadcast_576 stored as values in memory (estimated size 536.0 B, free 1918.4 MiB)
18:46:56.827 INFO MemoryStore - Block broadcast_576_piece0 stored as bytes in memory (estimated size 187.0 B, free 1918.4 MiB)
18:46:56.828 INFO BlockManagerInfo - Added broadcast_576_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.7 MiB)
18:46:56.828 INFO SparkContext - Created broadcast 576 from broadcast at CramSource.java:114
18:46:56.828 INFO MemoryStore - Block broadcast_577 stored as values in memory (estimated size 297.9 KiB, free 1918.1 MiB)
18:46:56.834 INFO MemoryStore - Block broadcast_577_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.1 MiB)
18:46:56.834 INFO BlockManagerInfo - Added broadcast_577_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.7 MiB)
18:46:56.835 INFO SparkContext - Created broadcast 577 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.852 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:111
18:46:56.852 INFO DAGScheduler - Got job 217 (collect at ReadsSparkSourceUnitTest.java:111) with 4 output partitions
18:46:56.852 INFO DAGScheduler - Final stage: ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111)
18:46:56.852 INFO DAGScheduler - Parents of final stage: List()
18:46:56.852 INFO DAGScheduler - Missing parents: List()
18:46:56.852 INFO DAGScheduler - Submitting ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:56.852 INFO MemoryStore - Block broadcast_578 stored as values in memory (estimated size 3.0 KiB, free 1918.1 MiB)
18:46:56.853 INFO MemoryStore - Block broadcast_578_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1918.1 MiB)
18:46:56.853 INFO BlockManagerInfo - Added broadcast_578_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.7 MiB)
18:46:56.853 INFO SparkContext - Created broadcast 578 from broadcast at DAGScheduler.scala:1580
18:46:56.853 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 281 (ParallelCollectionRDD[1367] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:56.853 INFO TaskSchedulerImpl - Adding task set 281.0 with 4 tasks resource profile 0
18:46:56.854 INFO TaskSetManager - Starting task 0.0 in stage 281.0 (TID 361) (localhost, executor driver, partition 0, PROCESS_LOCAL, 45675 bytes)
18:46:56.854 INFO TaskSetManager - Starting task 1.0 in stage 281.0 (TID 362) (localhost, executor driver, partition 1, PROCESS_LOCAL, 45855 bytes)
18:46:56.854 INFO TaskSetManager - Starting task 2.0 in stage 281.0 (TID 363) (localhost, executor driver, partition 2, PROCESS_LOCAL, 45795 bytes)
18:46:56.855 INFO TaskSetManager - Starting task 3.0 in stage 281.0 (TID 364) (localhost, executor driver, partition 3, PROCESS_LOCAL, 46068 bytes)
18:46:56.855 INFO Executor - Running task 1.0 in stage 281.0 (TID 362)
18:46:56.855 INFO Executor - Running task 0.0 in stage 281.0 (TID 361)
18:46:56.855 INFO Executor - Running task 3.0 in stage 281.0 (TID 364)
18:46:56.855 INFO Executor - Running task 2.0 in stage 281.0 (TID 363)
18:46:56.856 INFO Executor - Finished task 2.0 in stage 281.0 (TID 363). 39027 bytes result sent to driver
18:46:56.856 INFO Executor - Finished task 3.0 in stage 281.0 (TID 364). 39300 bytes result sent to driver
18:46:56.856 INFO Executor - Finished task 0.0 in stage 281.0 (TID 361). 38950 bytes result sent to driver
18:46:56.857 INFO Executor - Finished task 1.0 in stage 281.0 (TID 362). 39087 bytes result sent to driver
18:46:56.857 INFO TaskSetManager - Finished task 2.0 in stage 281.0 (TID 363) in 3 ms on localhost (executor driver) (1/4)
18:46:56.857 INFO TaskSetManager - Finished task 0.0 in stage 281.0 (TID 361) in 4 ms on localhost (executor driver) (2/4)
18:46:56.857 INFO TaskSetManager - Finished task 3.0 in stage 281.0 (TID 364) in 3 ms on localhost (executor driver) (3/4)
18:46:56.857 INFO TaskSetManager - Finished task 1.0 in stage 281.0 (TID 362) in 3 ms on localhost (executor driver) (4/4)
18:46:56.857 INFO TaskSchedulerImpl - Removed TaskSet 281.0, whose tasks have all completed, from pool
18:46:56.858 INFO DAGScheduler - ResultStage 281 (collect at ReadsSparkSourceUnitTest.java:111) finished in 0.006 s
18:46:56.858 INFO DAGScheduler - Job 217 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.858 INFO TaskSchedulerImpl - Killing all running tasks in stage 281: Stage finished
18:46:56.858 INFO DAGScheduler - Job 217 finished: collect at ReadsSparkSourceUnitTest.java:111, took 0.006101 s
18:46:56.858 INFO FileInputFormat - Total input files to process : 1
18:46:56.883 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:112
18:46:56.884 INFO DAGScheduler - Got job 218 (collect at ReadsSparkSourceUnitTest.java:112) with 1 output partitions
18:46:56.884 INFO DAGScheduler - Final stage: ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112)
18:46:56.884 INFO DAGScheduler - Parents of final stage: List()
18:46:56.884 INFO DAGScheduler - Missing parents: List()
18:46:56.884 INFO DAGScheduler - Submitting ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:56.895 INFO MemoryStore - Block broadcast_579 stored as values in memory (estimated size 286.8 KiB, free 1917.8 MiB)
18:46:56.896 INFO MemoryStore - Block broadcast_579_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1917.7 MiB)
18:46:56.896 INFO BlockManagerInfo - Added broadcast_579_piece0 in memory on localhost:45727 (size: 103.6 KiB, free: 1919.5 MiB)
18:46:56.896 INFO SparkContext - Created broadcast 579 from broadcast at DAGScheduler.scala:1580
18:46:56.897 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 282 (MapPartitionsRDD[1373] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:56.897 INFO TaskSchedulerImpl - Adding task set 282.0 with 1 tasks resource profile 0
18:46:56.897 INFO TaskSetManager - Starting task 0.0 in stage 282.0 (TID 365) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
18:46:56.897 INFO Executor - Running task 0.0 in stage 282.0 (TID 365)
18:46:56.917 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
18:46:56.925 INFO Executor - Finished task 0.0 in stage 282.0 (TID 365). 154058 bytes result sent to driver
18:46:56.925 INFO TaskSetManager - Finished task 0.0 in stage 282.0 (TID 365) in 28 ms on localhost (executor driver) (1/1)
18:46:56.925 INFO TaskSchedulerImpl - Removed TaskSet 282.0, whose tasks have all completed, from pool
18:46:56.925 INFO DAGScheduler - ResultStage 282 (collect at ReadsSparkSourceUnitTest.java:112) finished in 0.041 s
18:46:56.925 INFO DAGScheduler - Job 218 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:56.925 INFO TaskSchedulerImpl - Killing all running tasks in stage 282: Stage finished
18:46:56.926 INFO DAGScheduler - Job 218 finished: collect at ReadsSparkSourceUnitTest.java:112, took 0.042158 s
18:46:56.929 INFO MemoryStore - Block broadcast_580 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
18:46:56.939 INFO MemoryStore - Block broadcast_580_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.3 MiB)
18:46:56.939 INFO BlockManagerInfo - Added broadcast_580_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.939 INFO SparkContext - Created broadcast 580 from newAPIHadoopFile at PathSplitSource.java:96
18:46:56.961 INFO MemoryStore - Block broadcast_581 stored as values in memory (estimated size 536.0 B, free 1917.3 MiB)
18:46:56.966 INFO MemoryStore - Block broadcast_581_piece0 stored as bytes in memory (estimated size 187.0 B, free 1917.3 MiB)
18:46:56.966 INFO BlockManagerInfo - Added broadcast_581_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.5 MiB)
18:46:56.966 INFO BlockManagerInfo - Removed broadcast_580_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.5 MiB)
18:46:56.966 INFO SparkContext - Created broadcast 581 from broadcast at CramSource.java:114
18:46:56.967 INFO BlockManagerInfo - Removed broadcast_578_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.6 MiB)
18:46:56.967 INFO BlockManagerInfo - Removed broadcast_570_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.6 MiB)
18:46:56.967 INFO BlockManagerInfo - Removed broadcast_576_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.6 MiB)
18:46:56.968 INFO MemoryStore - Block broadcast_582 stored as values in memory (estimated size 297.9 KiB, free 1917.7 MiB)
18:46:56.968 INFO BlockManagerInfo - Removed broadcast_579_piece0 on localhost:45727 in memory (size: 103.6 KiB, free: 1919.7 MiB)
18:46:56.968 INFO BlockManagerInfo - Removed broadcast_574_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.7 MiB)
18:46:56.968 INFO BlockManagerInfo - Removed broadcast_575_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:56.969 INFO BlockManagerInfo - Removed broadcast_577_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:56.969 INFO BlockManagerInfo - Removed broadcast_572_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.8 MiB)
18:46:56.969 INFO BlockManagerInfo - Removed broadcast_573_piece0 on localhost:45727 in memory (size: 153.7 KiB, free: 1920.0 MiB)
18:46:56.970 INFO BlockManagerInfo - Removed broadcast_571_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1920.0 MiB)
18:46:56.978 INFO MemoryStore - Block broadcast_582_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
18:46:56.978 INFO BlockManagerInfo - Added broadcast_582_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1920.0 MiB)
18:46:56.978 INFO SparkContext - Created broadcast 582 from newAPIHadoopFile at PathSplitSource.java:96
18:46:57.003 WARN FileSystem - Failed to initialize filesystem hdfs://bogus/path.bam: java.lang.IllegalArgumentException: java.net.UnknownHostException: bogus
18:46:57.005 INFO MemoryStore - Block broadcast_583 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
18:46:57.011 INFO MemoryStore - Block broadcast_583_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
18:46:57.012 INFO BlockManagerInfo - Added broadcast_583_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.9 MiB)
18:46:57.012 INFO SparkContext - Created broadcast 583 from newAPIHadoopFile at PathSplitSource.java:96
18:46:57.036 INFO MemoryStore - Block broadcast_584 stored as values in memory (estimated size 298.0 KiB, free 1919.0 MiB)
18:46:57.043 INFO MemoryStore - Block broadcast_584_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.0 MiB)
18:46:57.043 INFO BlockManagerInfo - Added broadcast_584_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.9 MiB)
18:46:57.043 INFO SparkContext - Created broadcast 584 from newAPIHadoopFile at PathSplitSource.java:96
18:46:57.066 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:154
18:46:57.066 INFO DAGScheduler - Got job 219 (collect at ReadsSparkSourceUnitTest.java:154) with 4 output partitions
18:46:57.066 INFO DAGScheduler - Final stage: ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154)
18:46:57.066 INFO DAGScheduler - Parents of final stage: List()
18:46:57.066 INFO DAGScheduler - Missing parents: List()
18:46:57.066 INFO DAGScheduler - Submitting ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310), which has no missing parents
18:46:57.067 INFO MemoryStore - Block broadcast_585 stored as values in memory (estimated size 3.0 KiB, free 1919.0 MiB)
18:46:57.067 INFO MemoryStore - Block broadcast_585_piece0 stored as bytes in memory (estimated size 1778.0 B, free 1919.0 MiB)
18:46:57.067 INFO BlockManagerInfo - Added broadcast_585_piece0 in memory on localhost:45727 (size: 1778.0 B, free: 1919.9 MiB)
18:46:57.067 INFO SparkContext - Created broadcast 585 from broadcast at DAGScheduler.scala:1580
18:46:57.068 INFO DAGScheduler - Submitting 4 missing tasks from ResultStage 283 (ParallelCollectionRDD[1388] at parallelize at ReadsSparkSourceUnitTest.java:310) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
18:46:57.068 INFO TaskSchedulerImpl - Adding task set 283.0 with 4 tasks resource profile 0
18:46:57.068 INFO TaskSetManager - Starting task 0.0 in stage 283.0 (TID 366) (localhost, executor driver, partition 0, PROCESS_LOCAL, 169886 bytes)
18:46:57.069 INFO TaskSetManager - Starting task 1.0 in stage 283.0 (TID 367) (localhost, executor driver, partition 1, PROCESS_LOCAL, 170093 bytes)
18:46:57.069 INFO TaskSetManager - Starting task 2.0 in stage 283.0 (TID 368) (localhost, executor driver, partition 2, PROCESS_LOCAL, 170081 bytes)
18:46:57.070 INFO TaskSetManager - Starting task 3.0 in stage 283.0 (TID 369) (localhost, executor driver, partition 3, PROCESS_LOCAL, 169419 bytes)
18:46:57.070 INFO Executor - Running task 3.0 in stage 283.0 (TID 369)
18:46:57.072 INFO Executor - Running task 1.0 in stage 283.0 (TID 367)
18:46:57.072 INFO Executor - Running task 0.0 in stage 283.0 (TID 366)
18:46:57.072 INFO Executor - Finished task 3.0 in stage 283.0 (TID 369). 162694 bytes result sent to driver
18:46:57.073 INFO Executor - Running task 2.0 in stage 283.0 (TID 368)
18:46:57.073 INFO TaskSetManager - Finished task 3.0 in stage 283.0 (TID 369) in 4 ms on localhost (executor driver) (1/4)
18:46:57.074 INFO Executor - Finished task 1.0 in stage 283.0 (TID 367). 163325 bytes result sent to driver
18:46:57.074 INFO TaskSetManager - Finished task 1.0 in stage 283.0 (TID 367) in 5 ms on localhost (executor driver) (2/4)
18:46:57.074 INFO Executor - Finished task 2.0 in stage 283.0 (TID 368). 163313 bytes result sent to driver
18:46:57.075 INFO Executor - Finished task 0.0 in stage 283.0 (TID 366). 163161 bytes result sent to driver
18:46:57.076 INFO TaskSetManager - Finished task 0.0 in stage 283.0 (TID 366) in 8 ms on localhost (executor driver) (3/4)
18:46:57.076 INFO TaskSetManager - Finished task 2.0 in stage 283.0 (TID 368) in 7 ms on localhost (executor driver) (4/4)
18:46:57.076 INFO TaskSchedulerImpl - Removed TaskSet 283.0, whose tasks have all completed, from pool
18:46:57.076 INFO DAGScheduler - ResultStage 283 (collect at ReadsSparkSourceUnitTest.java:154) finished in 0.009 s
18:46:57.077 INFO DAGScheduler - Job 219 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:57.077 INFO TaskSchedulerImpl - Killing all running tasks in stage 283: Stage finished
18:46:57.077 INFO DAGScheduler - Job 219 finished: collect at ReadsSparkSourceUnitTest.java:154, took 0.011099 s
18:46:57.088 INFO FileInputFormat - Total input files to process : 2
18:46:57.151 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:155
18:46:57.151 INFO DAGScheduler - Got job 220 (collect at ReadsSparkSourceUnitTest.java:155) with 2 output partitions
18:46:57.151 INFO DAGScheduler - Final stage: ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155)
18:46:57.151 INFO DAGScheduler - Parents of final stage: List()
18:46:57.151 INFO DAGScheduler - Missing parents: List()
18:46:57.151 INFO DAGScheduler - Submitting ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:57.181 INFO MemoryStore - Block broadcast_586 stored as values in memory (estimated size 426.2 KiB, free 1918.6 MiB)
18:46:57.182 INFO MemoryStore - Block broadcast_586_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.4 MiB)
18:46:57.182 INFO BlockManagerInfo - Added broadcast_586_piece0 in memory on localhost:45727 (size: 153.7 KiB, free: 1919.7 MiB)
18:46:57.183 INFO SparkContext - Created broadcast 586 from broadcast at DAGScheduler.scala:1580
18:46:57.183 INFO DAGScheduler - Submitting 2 missing tasks from ResultStage 284 (MapPartitionsRDD[1395] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0, 1))
18:46:57.183 INFO TaskSchedulerImpl - Adding task set 284.0 with 2 tasks resource profile 0
18:46:57.183 INFO TaskSetManager - Starting task 0.0 in stage 284.0 (TID 370) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7928 bytes)
18:46:57.184 INFO TaskSetManager - Starting task 1.0 in stage 284.0 (TID 371) (localhost, executor driver, partition 1, PROCESS_LOCAL, 7928 bytes)
18:46:57.184 INFO Executor - Running task 1.0 in stage 284.0 (TID 371)
18:46:57.184 INFO Executor - Running task 0.0 in stage 284.0 (TID 370)
18:46:57.223 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00000.bam:0+131967
18:46:57.231 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.recalibrated.DIQ.sharded.bam/part-r-00001.bam:0+131967
18:46:57.232 INFO Executor - Finished task 1.0 in stage 284.0 (TID 371). 325590 bytes result sent to driver
18:46:57.233 INFO TaskSetManager - Finished task 1.0 in stage 284.0 (TID 371) in 50 ms on localhost (executor driver) (1/2)
18:46:57.245 INFO Executor - Finished task 0.0 in stage 284.0 (TID 370). 325590 bytes result sent to driver
18:46:57.246 INFO TaskSetManager - Finished task 0.0 in stage 284.0 (TID 370) in 63 ms on localhost (executor driver) (2/2)
18:46:57.246 INFO TaskSchedulerImpl - Removed TaskSet 284.0, whose tasks have all completed, from pool
18:46:57.246 INFO DAGScheduler - ResultStage 284 (collect at ReadsSparkSourceUnitTest.java:155) finished in 0.095 s
18:46:57.246 INFO DAGScheduler - Job 220 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:57.246 INFO TaskSchedulerImpl - Killing all running tasks in stage 284: Stage finished
18:46:57.246 INFO DAGScheduler - Job 220 finished: collect at ReadsSparkSourceUnitTest.java:155, took 0.095813 s
18:46:57.259 INFO MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
18:46:57.260 INFO NameNode - Formatting using clusterid: testClusterID
18:46:57.260 INFO FSEditLog - Edit logging is async:true
18:46:57.269 INFO FSNamesystem - KeyProvider: null
18:46:57.269 INFO FSNamesystem - fsLock is fair: true
18:46:57.269 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
18:46:57.269 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
18:46:57.269 INFO FSNamesystem - supergroup = supergroup
18:46:57.269 INFO FSNamesystem - isPermissionEnabled = true
18:46:57.269 INFO FSNamesystem - isStoragePolicyEnabled = true
18:46:57.269 INFO FSNamesystem - HA Enabled: false
18:46:57.269 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:57.269 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
18:46:57.269 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
18:46:57.269 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18:46:57.270 INFO BlockManager - The block deletion will start around 2025 May 19 18:46:57
18:46:57.270 INFO GSet - Computing capacity for map BlocksMap
18:46:57.270 INFO GSet - VM type = 64-bit
18:46:57.270 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
18:46:57.270 INFO GSet - capacity = 2^23 = 8388608 entries
18:46:57.271 INFO BlockManager - Storage policy satisfier is disabled
18:46:57.271 INFO BlockManager - dfs.block.access.token.enable = false
18:46:57.271 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
18:46:57.271 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
18:46:57.271 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
18:46:57.271 INFO BlockManager - defaultReplication = 1
18:46:57.271 INFO BlockManager - maxReplication = 512
18:46:57.271 INFO BlockManager - minReplication = 1
18:46:57.271 INFO BlockManager - maxReplicationStreams = 2
18:46:57.271 INFO BlockManager - redundancyRecheckInterval = 3000ms
18:46:57.271 INFO BlockManager - encryptDataTransfer = false
18:46:57.271 INFO BlockManager - maxNumBlocksToLog = 1000
18:46:57.272 INFO GSet - Computing capacity for map INodeMap
18:46:57.272 INFO GSet - VM type = 64-bit
18:46:57.272 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
18:46:57.272 INFO GSet - capacity = 2^22 = 4194304 entries
18:46:57.278 INFO FSDirectory - ACLs enabled? true
18:46:57.278 INFO FSDirectory - POSIX ACL inheritance enabled? true
18:46:57.278 INFO FSDirectory - XAttrs enabled? true
18:46:57.278 INFO NameNode - Caching file names occurring more than 10 times
18:46:57.278 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
18:46:57.278 INFO SnapshotManager - SkipList is disabled
18:46:57.278 INFO BlockManagerInfo - Removed broadcast_586_piece0 on localhost:45727 in memory (size: 153.7 KiB, free: 1919.9 MiB)
18:46:57.278 INFO GSet - Computing capacity for map cachedBlocks
18:46:57.278 INFO GSet - VM type = 64-bit
18:46:57.278 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
18:46:57.278 INFO GSet - capacity = 2^20 = 1048576 entries
18:46:57.278 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
18:46:57.278 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
18:46:57.278 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18:46:57.278 INFO FSNamesystem - Retry cache on namenode is enabled
18:46:57.278 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18:46:57.278 INFO GSet - Computing capacity for map NameNodeRetryCache
18:46:57.278 INFO BlockManagerInfo - Removed broadcast_585_piece0 on localhost:45727 in memory (size: 1778.0 B, free: 1919.9 MiB)
18:46:57.278 INFO GSet - VM type = 64-bit
18:46:57.278 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
18:46:57.278 INFO GSet - capacity = 2^17 = 131072 entries
18:46:57.279 INFO BlockManagerInfo - Removed broadcast_584_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.9 MiB)
18:46:57.279 INFO FSImage - Allocated new BlockPoolId: BP-2078786547-10.1.0.176-1747680417279
18:46:57.280 INFO BlockManagerInfo - Removed broadcast_582_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1920.0 MiB)
18:46:57.280 INFO BlockManagerInfo - Removed broadcast_583_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1920.0 MiB)
18:46:57.281 INFO BlockManagerInfo - Removed broadcast_581_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1920.0 MiB)
18:46:57.282 INFO Storage - Storage directory /tmp/minicluster_storage10586223875571632427/name-0-1 has been successfully formatted.
18:46:57.284 INFO Storage - Storage directory /tmp/minicluster_storage10586223875571632427/name-0-2 has been successfully formatted.
18:46:57.296 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage10586223875571632427/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
18:46:57.296 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage10586223875571632427/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
18:46:57.302 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage10586223875571632427/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
18:46:57.302 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage10586223875571632427/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
18:46:57.304 INFO NNStorageRetentionManager - Going to retain 1 images with txid >= 0
18:46:57.309 INFO FSNamesystem - Stopping services started for active state
18:46:57.309 INFO FSNamesystem - Stopping services started for standby state
18:46:57.309 INFO NameNode - createNameNode []
18:46:57.311 WARN MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
18:46:57.312 INFO MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
18:46:57.312 INFO MetricsSystemImpl - NameNode metrics system started
18:46:57.312 INFO NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
18:46:57.326 INFO JvmPauseMonitor - Starting JVM pause monitor
18:46:57.326 INFO DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
18:46:57.326 INFO DFSUtil - Starting Web-server for hdfs at: http://localhost:0
18:46:57.328 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
18:46:57.328 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
18:46:57.329 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
18:46:57.330 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
18:46:57.330 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
18:46:57.332 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
18:46:57.332 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
18:46:57.332 INFO HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
18:46:57.332 INFO HttpServer2 - Jetty bound to port 35335
18:46:57.332 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
18:46:57.334 INFO session - DefaultSessionIdManager workerName=node0
18:46:57.334 INFO session - No SessionScavenger set, using defaults
18:46:57.334 INFO session - node0 Scavenging every 660000ms
18:46:57.334 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
18:46:57.334 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@42d0ab29{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
18:46:57.422 INFO ContextHandler - Started o.e.j.w.WebAppContext@325e164a{hdfs,/,file:///tmp/jetty-localhost-35335-hadoop-hdfs-3_3_6-tests_jar-_-any-5104433684907455349/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
18:46:57.423 INFO AbstractConnector - Started ServerConnector@330af3ff{HTTP/1.1, (http/1.1)}{localhost:35335}
18:46:57.423 INFO Server - Started @75924ms
18:46:57.423 INFO FSEditLog - Edit logging is async:true
18:46:57.431 INFO FSNamesystem - KeyProvider: null
18:46:57.432 INFO FSNamesystem - fsLock is fair: true
18:46:57.432 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
18:46:57.432 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
18:46:57.432 INFO FSNamesystem - supergroup = supergroup
18:46:57.432 INFO FSNamesystem - isPermissionEnabled = true
18:46:57.432 INFO FSNamesystem - isStoragePolicyEnabled = true
18:46:57.432 INFO FSNamesystem - HA Enabled: false
18:46:57.432 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:57.432 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
18:46:57.432 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
18:46:57.432 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18:46:57.432 INFO BlockManager - The block deletion will start around 2025 May 19 18:46:57
18:46:57.432 INFO GSet - Computing capacity for map BlocksMap
18:46:57.432 INFO GSet - VM type = 64-bit
18:46:57.432 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
18:46:57.432 INFO GSet - capacity = 2^23 = 8388608 entries
18:46:57.438 INFO BlockManager - Storage policy satisfier is disabled
18:46:57.438 INFO BlockManager - dfs.block.access.token.enable = false
18:46:57.438 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
18:46:57.438 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
18:46:57.438 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
18:46:57.438 INFO BlockManager - defaultReplication = 1
18:46:57.438 INFO BlockManager - maxReplication = 512
18:46:57.438 INFO BlockManager - minReplication = 1
18:46:57.438 INFO BlockManager - maxReplicationStreams = 2
18:46:57.438 INFO BlockManager - redundancyRecheckInterval = 3000ms
18:46:57.438 INFO BlockManager - encryptDataTransfer = false
18:46:57.438 INFO BlockManager - maxNumBlocksToLog = 1000
18:46:57.438 INFO GSet - Computing capacity for map INodeMap
18:46:57.438 INFO GSet - VM type = 64-bit
18:46:57.439 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
18:46:57.439 INFO GSet - capacity = 2^22 = 4194304 entries
18:46:57.443 INFO FSDirectory - ACLs enabled? true
18:46:57.443 INFO FSDirectory - POSIX ACL inheritance enabled? true
18:46:57.443 INFO FSDirectory - XAttrs enabled? true
18:46:57.443 INFO NameNode - Caching file names occurring more than 10 times
18:46:57.443 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
18:46:57.443 INFO SnapshotManager - SkipList is disabled
18:46:57.443 INFO GSet - Computing capacity for map cachedBlocks
18:46:57.443 INFO GSet - VM type = 64-bit
18:46:57.443 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
18:46:57.443 INFO GSet - capacity = 2^20 = 1048576 entries
18:46:57.443 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
18:46:57.443 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
18:46:57.443 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18:46:57.443 INFO FSNamesystem - Retry cache on namenode is enabled
18:46:57.443 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18:46:57.443 INFO GSet - Computing capacity for map NameNodeRetryCache
18:46:57.443 INFO GSet - VM type = 64-bit
18:46:57.443 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
18:46:57.443 INFO GSet - capacity = 2^17 = 131072 entries
18:46:57.445 INFO Storage - Lock on /tmp/minicluster_storage10586223875571632427/name-0-1/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:57.446 INFO Storage - Lock on /tmp/minicluster_storage10586223875571632427/name-0-2/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:57.446 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage10586223875571632427/name-0-1/current
18:46:57.446 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage10586223875571632427/name-0-2/current
18:46:57.446 INFO FSImage - No edit log streams selected.
18:46:57.446 INFO FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage10586223875571632427/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
18:46:57.447 INFO FSImageFormatPBINode - Loading 1 INodes.
18:46:57.448 INFO FSImageFormatPBINode - Successfully loaded 1 inodes
18:46:57.448 INFO FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
18:46:57.448 INFO FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
18:46:57.448 INFO FSImage - Loaded image for txid 0 from /tmp/minicluster_storage10586223875571632427/name-0-1/current/fsimage_0000000000000000000
18:46:57.448 INFO FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
18:46:57.448 INFO FSEditLog - Starting log segment at 1
18:46:57.451 INFO NameCache - initialized with 0 entries 0 lookups
18:46:57.451 INFO FSNamesystem - Finished loading FSImage in 6 msecs
18:46:57.451 INFO NameNode - RPC server is binding to localhost:0
18:46:57.451 INFO NameNode - Enable NameNode state context:false
18:46:57.451 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
18:46:57.451 INFO Server - Listener at localhost:34389
18:46:57.452 INFO Server - Starting Socket Reader #1 for port 0
18:46:57.454 INFO NameNode - Clients are to use localhost:34389 to access this namenode/service.
18:46:57.454 INFO FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
18:46:57.464 INFO LeaseManager - Number of blocks under construction: 0
18:46:57.464 INFO DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
18:46:57.465 INFO BlockManager - Start MarkedDeleteBlockScrubber thread
18:46:57.465 INFO BlockManager - initializing replication queues
18:46:57.465 INFO StateChange - STATE* Leaving safe mode after 0 secs
18:46:57.465 INFO StateChange - STATE* Network topology has 0 racks and 0 datanodes
18:46:57.465 INFO StateChange - STATE* UnderReplicatedBlocks has 0 blocks
18:46:57.467 INFO Server - IPC Server Responder: starting
18:46:57.467 INFO Server - IPC Server listener on 0: starting
18:46:57.469 INFO NameNode - NameNode RPC up at: localhost/127.0.0.1:34389
18:46:57.469 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
18:46:57.469 INFO FSNamesystem - Starting services required for active state
18:46:57.469 INFO FSDirectory - Initializing quota with 12 thread(s)
18:46:57.469 INFO FSDirectory - Quota initialization completed in 1 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
18:46:57.470 INFO BlockManager - Total number of blocks = 0
18:46:57.470 INFO BlockManager - Number of invalid blocks = 0
18:46:57.470 INFO BlockManager - Number of under-replicated blocks = 0
18:46:57.470 INFO BlockManager - Number of over-replicated blocks = 0
18:46:57.470 INFO BlockManager - Number of blocks being written = 0
18:46:57.470 INFO StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 4 msec
18:46:57.470 INFO CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
18:46:57.472 INFO MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data1,[DISK]file:/tmp/minicluster_storage10586223875571632427/data/data2
18:46:57.472 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data1
18:46:57.473 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data2
18:46:57.485 INFO MetricsSystemImpl - DataNode metrics system started (again)
18:46:57.485 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:57.485 INFO BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
18:46:57.485 INFO DataNode - Configured hostname is 127.0.0.1
18:46:57.485 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:57.485 INFO DataNode - Starting DataNode with maxLockedMemory = 0
18:46:57.485 INFO DataNode - Opened streaming server at /127.0.0.1:38307
18:46:57.485 INFO DataNode - Balancing bandwidth is 104857600 bytes/s
18:46:57.486 INFO DataNode - Number threads for balancing is 100
18:46:57.487 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
18:46:57.488 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
18:46:57.489 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
18:46:57.490 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
18:46:57.490 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
18:46:57.490 INFO HttpServer2 - Jetty bound to port 33797
18:46:57.490 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
18:46:57.491 INFO session - DefaultSessionIdManager workerName=node0
18:46:57.491 INFO session - No SessionScavenger set, using defaults
18:46:57.491 INFO session - node0 Scavenging every 600000ms
18:46:57.491 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@5748052d{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
18:46:57.625 INFO ContextHandler - Started o.e.j.w.WebAppContext@27ae4ddd{datanode,/,file:///tmp/jetty-localhost-33797-hadoop-hdfs-3_3_6-tests_jar-_-any-11166007995197276174/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
18:46:57.626 INFO AbstractConnector - Started ServerConnector@521d7df{HTTP/1.1, (http/1.1)}{localhost:33797}
18:46:57.626 INFO Server - Started @76127ms
18:46:57.627 WARN DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
18:46:57.628 INFO DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:35865
18:46:57.628 INFO DataNode - dnUserName = runner
18:46:57.628 INFO DataNode - supergroup = supergroup
18:46:57.628 INFO JvmPauseMonitor - Starting JVM pause monitor
18:46:57.628 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
18:46:57.628 INFO Server - Listener at localhost:44793
18:46:57.631 INFO DataNode - Opened IPC server at /127.0.0.1:44793
18:46:57.631 INFO Server - Starting Socket Reader #1 for port 0
18:46:57.645 INFO DataNode - Refresh request received for nameservices: null
18:46:57.645 INFO DataNode - Starting BPOfferServices for nameservices: <default>
18:46:57.647 INFO DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:34389 starting to offer service
18:46:57.648 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
18:46:57.650 INFO Server - IPC Server Responder: starting
18:46:57.650 INFO Server - IPC Server listener on 0: starting
18:46:57.652 INFO DataNode - Acknowledging ACTIVE Namenode during handshake: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:34389
18:46:57.652 INFO Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
18:46:57.653 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
18:46:57.653 INFO Storage - Lock on /tmp/minicluster_storage10586223875571632427/data/data1/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:57.653 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data1 is not formatted for namespace 1513780436. Formatting...
18:46:57.653 INFO MiniDFSCluster - dnInfo.length != numDataNodes
18:46:57.653 INFO MiniDFSCluster - Waiting for cluster to become active
18:46:57.653 INFO Storage - Generated new storageID DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3 for directory /tmp/minicluster_storage10586223875571632427/data/data1
18:46:57.655 INFO Storage - Lock on /tmp/minicluster_storage10586223875571632427/data/data2/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:57.655 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data2 is not formatted for namespace 1513780436. Formatting...
18:46:57.655 INFO Storage - Generated new storageID DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009 for directory /tmp/minicluster_storage10586223875571632427/data/data2
18:46:57.674 INFO Storage - Analyzing storage directories for bpid BP-2078786547-10.1.0.176-1747680417279
18:46:57.674 INFO Storage - Locking is disabled for /tmp/minicluster_storage10586223875571632427/data/data1/current/BP-2078786547-10.1.0.176-1747680417279
18:46:57.674 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data1 and block pool id BP-2078786547-10.1.0.176-1747680417279 is not formatted. Formatting ...
18:46:57.674 INFO Storage - Formatting block pool BP-2078786547-10.1.0.176-1747680417279 directory /tmp/minicluster_storage10586223875571632427/data/data1/current/BP-2078786547-10.1.0.176-1747680417279/current
18:46:57.691 INFO Storage - Analyzing storage directories for bpid BP-2078786547-10.1.0.176-1747680417279
18:46:57.691 INFO Storage - Locking is disabled for /tmp/minicluster_storage10586223875571632427/data/data2/current/BP-2078786547-10.1.0.176-1747680417279
18:46:57.691 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data2 and block pool id BP-2078786547-10.1.0.176-1747680417279 is not formatted. Formatting ...
18:46:57.691 INFO Storage - Formatting block pool BP-2078786547-10.1.0.176-1747680417279 directory /tmp/minicluster_storage10586223875571632427/data/data2/current/BP-2078786547-10.1.0.176-1747680417279/current
18:46:57.693 INFO DataNode - Setting up storage: nsid=1513780436;bpid=BP-2078786547-10.1.0.176-1747680417279;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=1513780436;c=1747680417279;bpid=BP-2078786547-10.1.0.176-1747680417279;dnuuid=null
18:46:57.694 INFO DataNode - Generated and persisted new Datanode UUID 9bfc734e-24cf-44df-8af6-321e62d120ef
18:46:57.694 INFO FsDatasetImpl - The datanode lock is a read write lock
18:46:57.695 INFO FsDatasetImpl - Added new volume: DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3
18:46:57.695 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data1, StorageType: DISK
18:46:57.696 INFO FsDatasetImpl - Added new volume: DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009
18:46:57.696 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage10586223875571632427/data/data2, StorageType: DISK
18:46:57.696 INFO MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
18:46:57.696 INFO FsDatasetImpl - Registered FSDatasetState MBean
18:46:57.696 INFO FsDatasetImpl - Adding block pool BP-2078786547-10.1.0.176-1747680417279
18:46:57.697 INFO FsDatasetImpl - Scanning block pool BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data1...
18:46:57.697 INFO FsDatasetImpl - Scanning block pool BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data2...
18:46:57.697 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage10586223875571632427/data/data1/current/BP-2078786547-10.1.0.176-1747680417279/current, will proceed with du for space computation.
18:46:57.697 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage10586223875571632427/data/data2/current/BP-2078786547-10.1.0.176-1747680417279/current, will proceed with du for space computation.
18:46:57.712 INFO FsDatasetImpl - Time taken to scan block pool BP-2078786547-10.1.0.176-1747680417279 on /tmp/minicluster_storage10586223875571632427/data/data1: 16ms
18:46:57.712 INFO FsDatasetImpl - Time taken to scan block pool BP-2078786547-10.1.0.176-1747680417279 on /tmp/minicluster_storage10586223875571632427/data/data2: 16ms
18:46:57.712 INFO FsDatasetImpl - Total time to scan all replicas for block pool BP-2078786547-10.1.0.176-1747680417279: 16ms
18:46:57.712 INFO FsDatasetImpl - Adding replicas to map for block pool BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data1...
18:46:57.712 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage10586223875571632427/data/data1/current/BP-2078786547-10.1.0.176-1747680417279/current/replicas doesn't exist
18:46:57.713 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data1: 0ms
18:46:57.713 INFO FsDatasetImpl - Adding replicas to map for block pool BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data2...
18:46:57.713 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage10586223875571632427/data/data2/current/BP-2078786547-10.1.0.176-1747680417279/current/replicas doesn't exist
18:46:57.713 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data2: 0ms
18:46:57.713 INFO FsDatasetImpl - Total time to add all replicas to map for block pool BP-2078786547-10.1.0.176-1747680417279: 0ms
18:46:57.713 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage10586223875571632427/data/data1
18:46:57.713 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage10586223875571632427/data/data1
18:46:57.713 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage10586223875571632427/data/data2
18:46:57.713 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage10586223875571632427/data/data2
18:46:57.714 INFO VolumeScanner - Now scanning bpid BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data2
18:46:57.714 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage10586223875571632427/data/data2, DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009): finished scanning block pool BP-2078786547-10.1.0.176-1747680417279
18:46:57.714 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage10586223875571632427/data/data2, DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009): no suitable block pools found to scan. Waiting 1814400000 ms.
18:46:57.714 INFO VolumeScanner - Now scanning bpid BP-2078786547-10.1.0.176-1747680417279 on volume /tmp/minicluster_storage10586223875571632427/data/data1
18:46:57.714 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage10586223875571632427/data/data1, DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3): finished scanning block pool BP-2078786547-10.1.0.176-1747680417279
18:46:57.714 WARN DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
18:46:57.715 INFO DirectoryScanner - Periodic Directory Tree Verification scan starting in 841413ms with interval of 21600000ms and throttle limit of -1ms/s
18:46:57.715 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage10586223875571632427/data/data1, DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3): no suitable block pools found to scan. Waiting 1814399999 ms.
18:46:57.715 INFO DataNode - Block pool BP-2078786547-10.1.0.176-1747680417279 (Datanode Uuid 9bfc734e-24cf-44df-8af6-321e62d120ef) service to localhost/127.0.0.1:34389 beginning handshake with NN
18:46:57.715 INFO StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:38307, datanodeUuid=9bfc734e-24cf-44df-8af6-321e62d120ef, infoPort=35865, infoSecurePort=0, ipcPort=44793, storageInfo=lv=-57;cid=testClusterID;nsid=1513780436;c=1747680417279) storage 9bfc734e-24cf-44df-8af6-321e62d120ef
18:46:57.715 INFO NetworkTopology - Adding a new node: /default-rack/127.0.0.1:38307
18:46:57.716 INFO BlockReportLeaseManager - Registered DN 9bfc734e-24cf-44df-8af6-321e62d120ef (127.0.0.1:38307).
18:46:57.716 INFO DataNode - Block pool BP-2078786547-10.1.0.176-1747680417279 (Datanode Uuid 9bfc734e-24cf-44df-8af6-321e62d120ef) service to localhost/127.0.0.1:34389 successfully registered with NN
18:46:57.716 INFO DataNode - For namenode localhost/127.0.0.1:34389 using BLOCKREPORT_INTERVAL of 21600000 msecs, CACHEREPORT_INTERVAL of 10000 msecs, initial delay of 0 msecs; heartBeatInterval=3000
18:46:57.716 INFO DataNode - Starting IBR Task Handler.
18:46:57.716 INFO DatanodeDescriptor - Adding new storage ID DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3 for DN 127.0.0.1:38307
18:46:57.717 INFO DatanodeDescriptor - Adding new storage ID DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009 for DN 127.0.0.1:38307
18:46:57.717 INFO DataNode - After receiving heartbeat response, updating state of namenode localhost:34389 to active
18:46:57.717 INFO BlockStateChange - BLOCK* processReport 0x25a901acc574ad37 with lease ID 0x501238ac96a846ab: Processing first storage report for DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009 from datanode DatanodeRegistration(127.0.0.1:38307, datanodeUuid=9bfc734e-24cf-44df-8af6-321e62d120ef, infoPort=35865, infoSecurePort=0, ipcPort=44793, storageInfo=lv=-57;cid=testClusterID;nsid=1513780436;c=1747680417279)
18:46:57.717 INFO BlockStateChange - BLOCK* processReport 0x25a901acc574ad37 with lease ID 0x501238ac96a846ab: from storage DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009 node DatanodeRegistration(127.0.0.1:38307, datanodeUuid=9bfc734e-24cf-44df-8af6-321e62d120ef, infoPort=35865, infoSecurePort=0, ipcPort=44793, storageInfo=lv=-57;cid=testClusterID;nsid=1513780436;c=1747680417279), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
18:46:57.717 INFO BlockStateChange - BLOCK* processReport 0x25a901acc574ad37 with lease ID 0x501238ac96a846ab: Processing first storage report for DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3 from datanode DatanodeRegistration(127.0.0.1:38307, datanodeUuid=9bfc734e-24cf-44df-8af6-321e62d120ef, infoPort=35865, infoSecurePort=0, ipcPort=44793, storageInfo=lv=-57;cid=testClusterID;nsid=1513780436;c=1747680417279)
18:46:57.718 INFO BlockStateChange - BLOCK* processReport 0x25a901acc574ad37 with lease ID 0x501238ac96a846ab: from storage DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3 node DatanodeRegistration(127.0.0.1:38307, datanodeUuid=9bfc734e-24cf-44df-8af6-321e62d120ef, infoPort=35865, infoSecurePort=0, ipcPort=44793, storageInfo=lv=-57;cid=testClusterID;nsid=1513780436;c=1747680417279), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
18:46:57.718 INFO DataNode - Successfully sent block report 0x25a901acc574ad37 with lease ID 0x501238ac96a846ab to namenode: localhost/127.0.0.1:34389, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
18:46:57.718 INFO DataNode - Got finalize command for block pool BP-2078786547-10.1.0.176-1747680417279
18:46:57.754 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
18:46:57.754 INFO MiniDFSCluster - Cluster is active
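The startup recorded above follows the standard MiniDFSCluster test pattern. As a point of reference only (a minimal sketch, not the test's own helper code), a single-DataNode mini cluster like this one is typically brought up along these lines, with the base directory standing in for the /tmp/minicluster_storage... path in the log:

// Minimal sketch (not the test's own helper): bringing up the single-DataNode
// MiniDFSCluster whose startup the log above records. Class and path names are illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsSketch {
    public static MiniDFSCluster startMiniCluster(String baseDir) throws Exception {
        Configuration conf = new Configuration();
        // keep NameNode/DataNode storage under a throwaway directory,
        // analogous to /tmp/minicluster_storage... in the log
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir);
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                .numDataNodes(1)      // the log starts exactly one DataNode ("Starting DataNode 0")
                .build();
        cluster.waitActive();         // "Waiting for cluster to become active" -> "Cluster is active"
        FileSystem fs = cluster.getFileSystem();
        System.out.println("Mini HDFS running at " + fs.getUri());
        return cluster;
    }
}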
18:46:57.756 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:57.758 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.cram dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
18:46:57.759 INFO StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:38307 for /user/runner/hdfs.cram
18:46:57.760 INFO DataNode - Receiving BP-2078786547-10.1.0.176-1747680417279:blk_1073741825_1001 src: /127.0.0.1:52248 dest: /127.0.0.1:38307
18:46:57.762 INFO clienttrace - src: /127.0.0.1:52248, dest: /127.0.0.1:38307, bytes: 50619, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1955231941_1, offset: 0, srvID: 9bfc734e-24cf-44df-8af6-321e62d120ef, blockid: BP-2078786547-10.1.0.176-1747680417279:blk_1073741825_1001, duration(ns): 595361
18:46:57.762 INFO DataNode - PacketResponder: BP-2078786547-10.1.0.176-1747680417279:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
18:46:57.762 INFO FSNamesystem - BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE (numNodes=0 < minimum=1) in file /user/runner/hdfs.cram
18:46:58.163 INFO StateChange - DIR* completeFile: /user/runner/hdfs.cram is closed by DFSClient_NONMAPREDUCE_1955231941_1
18:46:58.164 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.165 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.fasta dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
18:46:58.166 INFO StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:38307 for /user/runner/hdfs.fasta
18:46:58.167 INFO DataNode - Receiving BP-2078786547-10.1.0.176-1747680417279:blk_1073741826_1002 src: /127.0.0.1:34012 dest: /127.0.0.1:38307
18:46:58.171 INFO clienttrace - src: /127.0.0.1:34012, dest: /127.0.0.1:38307, bytes: 1016671, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1955231941_1, offset: 0, srvID: 9bfc734e-24cf-44df-8af6-321e62d120ef, blockid: BP-2078786547-10.1.0.176-1747680417279:blk_1073741826_1002, duration(ns): 2906385
18:46:58.171 INFO DataNode - PacketResponder: BP-2078786547-10.1.0.176-1747680417279:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
18:46:58.172 INFO StateChange - DIR* completeFile: /user/runner/hdfs.fasta is closed by DFSClient_NONMAPREDUCE_1955231941_1
18:46:58.172 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.173 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.fasta.fai dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
18:46:58.174 INFO StateChange - BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:38307 for /user/runner/hdfs.fasta.fai
18:46:58.175 INFO DataNode - Receiving BP-2078786547-10.1.0.176-1747680417279:blk_1073741827_1003 src: /127.0.0.1:34022 dest: /127.0.0.1:38307
18:46:58.176 INFO clienttrace - src: /127.0.0.1:34022, dest: /127.0.0.1:38307, bytes: 19, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1955231941_1, offset: 0, srvID: 9bfc734e-24cf-44df-8af6-321e62d120ef, blockid: BP-2078786547-10.1.0.176-1747680417279:blk_1073741827_1003, duration(ns): 330657
18:46:58.176 INFO DataNode - PacketResponder: BP-2078786547-10.1.0.176-1747680417279:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
18:46:58.176 INFO StateChange - DIR* completeFile: /user/runner/hdfs.fasta.fai is closed by DFSClient_NONMAPREDUCE_1955231941_1
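The three create/completeFile pairs above stage the test inputs into the mini cluster as /user/runner/hdfs.cram, /user/runner/hdfs.fasta and /user/runner/hdfs.fasta.fai, each small enough to occupy a single block (blk_1073741825 through blk_1073741827). A minimal sketch of that staging step, assuming FileSystem.copyFromLocalFile is used; the CRAM's local path matches the test resource seen later in the log, while the FASTA local paths are placeholders:

// Staging sketch; placeholder local paths are marked as such.
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StageInputsSketch {
    static void stageInputs(FileSystem fs) throws Exception {
        fs.copyFromLocalFile(new Path("src/test/resources/NA12878.chr17_69k_70k.dictFix.cram"),
                             new Path("/user/runner/hdfs.cram"));
        fs.copyFromLocalFile(new Path("local.fasta"),     new Path("/user/runner/hdfs.fasta"));       // placeholder source
        fs.copyFromLocalFile(new Path("local.fasta.fai"), new Path("/user/runner/hdfs.fasta.fai"));   // placeholder source
    }
}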
18:46:58.177 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.177 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.177 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.178 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.178 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.179 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
18:46:58.179 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
18:46:58.179 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.180 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.180 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.181 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.182 WARN DFSUtil - Unexpected value for data transfer bytes=23 duration=0
18:46:58.183 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
18:46:58.183 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.184 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.184 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
18:46:58.184 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
18:46:58.185 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.185 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.186 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
18:46:58.186 INFO MemoryStore - Block broadcast_587 stored as values in memory (estimated size 472.0 B, free 1920.0 MiB)
18:46:58.187 INFO MemoryStore - Block broadcast_587_piece0 stored as bytes in memory (estimated size 129.0 B, free 1920.0 MiB)
18:46:58.187 INFO BlockManagerInfo - Added broadcast_587_piece0 in memory on localhost:45727 (size: 129.0 B, free: 1920.0 MiB)
18:46:58.187 INFO SparkContext - Created broadcast 587 from broadcast at CramSource.java:114
18:46:58.188 INFO MemoryStore - Block broadcast_588 stored as values in memory (estimated size 297.9 KiB, free 1919.7 MiB)
18:46:58.194 INFO MemoryStore - Block broadcast_588_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
18:46:58.194 INFO BlockManagerInfo - Added broadcast_588_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1920.0 MiB)
18:46:58.194 INFO SparkContext - Created broadcast 588 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.209 INFO MemoryStore - Block broadcast_589 stored as values in memory (estimated size 536.0 B, free 1919.7 MiB)
18:46:58.209 INFO MemoryStore - Block broadcast_589_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.7 MiB)
18:46:58.210 INFO BlockManagerInfo - Added broadcast_589_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1920.0 MiB)
18:46:58.210 INFO SparkContext - Created broadcast 589 from broadcast at CramSource.java:114
18:46:58.211 INFO MemoryStore - Block broadcast_590 stored as values in memory (estimated size 297.9 KiB, free 1919.4 MiB)
18:46:58.219 INFO MemoryStore - Block broadcast_590_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.3 MiB)
18:46:58.219 INFO BlockManagerInfo - Added broadcast_590_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.9 MiB)
18:46:58.219 INFO SparkContext - Created broadcast 590 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.233 INFO FileInputFormat - Total input files to process : 1
18:46:58.258 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:244
18:46:58.258 INFO DAGScheduler - Got job 221 (collect at ReadsSparkSourceUnitTest.java:244) with 1 output partitions
18:46:58.258 INFO DAGScheduler - Final stage: ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244)
18:46:58.258 INFO DAGScheduler - Parents of final stage: List()
18:46:58.258 INFO DAGScheduler - Missing parents: List()
18:46:58.258 INFO DAGScheduler - Submitting ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:58.269 INFO MemoryStore - Block broadcast_591 stored as values in memory (estimated size 286.8 KiB, free 1919.0 MiB)
18:46:58.270 INFO MemoryStore - Block broadcast_591_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.9 MiB)
18:46:58.271 INFO BlockManagerInfo - Added broadcast_591_piece0 in memory on localhost:45727 (size: 103.6 KiB, free: 1919.8 MiB)
18:46:58.271 INFO SparkContext - Created broadcast 591 from broadcast at DAGScheduler.scala:1580
18:46:58.271 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 285 (MapPartitionsRDD[1405] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:58.271 INFO TaskSchedulerImpl - Adding task set 285.0 with 1 tasks resource profile 0
18:46:58.271 INFO TaskSetManager - Starting task 0.0 in stage 285.0 (TID 372) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7839 bytes)
18:46:58.272 INFO Executor - Running task 0.0 in stage 285.0 (TID 372)
18:46:58.292 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.cram:0+50619
18:46:58.299 INFO Executor - Finished task 0.0 in stage 285.0 (TID 372). 154101 bytes result sent to driver
18:46:58.300 INFO TaskSetManager - Finished task 0.0 in stage 285.0 (TID 372) in 29 ms on localhost (executor driver) (1/1)
18:46:58.300 INFO TaskSchedulerImpl - Removed TaskSet 285.0, whose tasks have all completed, from pool
18:46:58.300 INFO DAGScheduler - ResultStage 285 (collect at ReadsSparkSourceUnitTest.java:244) finished in 0.042 s
18:46:58.300 INFO DAGScheduler - Job 221 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:58.300 INFO TaskSchedulerImpl - Killing all running tasks in stage 285: Stage finished
18:46:58.301 INFO DAGScheduler - Job 221 finished: collect at ReadsSparkSourceUnitTest.java:244, took 0.042769 s
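Job 221 reads the local copy of NA12878.chr17_69k_70k.dictFix.cram through newAPIHadoopFile, filters at ReadsSparkSource.java:96 and collects the result back to the driver. The sketch below shows only the generic newAPIHadoopFile -> filter -> collect shape of such a job; it substitutes Hadoop's TextInputFormat for the CRAM-aware input format the real test uses, purely to stay self-contained:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.List;

public class CollectSketch {
    // Stand-in for the real CRAM read path: same overall shape, but over plain text
    // so the example compiles without GATK/htsjdk on the classpath.
    static List<String> readAndCollect(JavaSparkContext sc, String path) {
        JavaRDD<String> lines = sc.newAPIHadoopFile(
                        path, TextInputFormat.class, LongWritable.class, Text.class, new Configuration())
                .values()                       // drop the byte-offset keys
                .map(Text::toString);           // copy out of Hadoop's reused Text objects
        return lines.filter(line -> !line.isEmpty())   // analogous to the filter at ReadsSparkSource.java:96
                    .collect();                        // ships results to the driver, as in job 221
    }
}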
18:46:58.301 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.302 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.302 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.302 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.303 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.303 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
18:46:58.304 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
18:46:58.304 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.304 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.305 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.305 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.306 WARN DFSUtil - Unexpected value for data transfer bytes=23 duration=0
18:46:58.307 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.308 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.308 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
18:46:58.308 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
18:46:58.309 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.309 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.309 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
18:46:58.310 INFO MemoryStore - Block broadcast_592 stored as values in memory (estimated size 472.0 B, free 1918.9 MiB)
18:46:58.310 INFO MemoryStore - Block broadcast_592_piece0 stored as bytes in memory (estimated size 129.0 B, free 1918.9 MiB)
18:46:58.310 INFO BlockManagerInfo - Added broadcast_592_piece0 in memory on localhost:45727 (size: 129.0 B, free: 1919.8 MiB)
18:46:58.311 INFO SparkContext - Created broadcast 592 from broadcast at CramSource.java:114
18:46:58.311 INFO MemoryStore - Block broadcast_593 stored as values in memory (estimated size 297.9 KiB, free 1918.6 MiB)
18:46:58.318 INFO MemoryStore - Block broadcast_593_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
18:46:58.318 INFO BlockManagerInfo - Added broadcast_593_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.8 MiB)
18:46:58.318 INFO SparkContext - Created broadcast 593 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.333 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.333 INFO FileInputFormat - Total input files to process : 1
18:46:58.333 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.358 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:245
18:46:58.358 INFO DAGScheduler - Got job 222 (collect at ReadsSparkSourceUnitTest.java:245) with 1 output partitions
18:46:58.358 INFO DAGScheduler - Final stage: ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245)
18:46:58.358 INFO DAGScheduler - Parents of final stage: List()
18:46:58.358 INFO DAGScheduler - Missing parents: List()
18:46:58.359 INFO DAGScheduler - Submitting ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:58.370 INFO MemoryStore - Block broadcast_594 stored as values in memory (estimated size 286.8 KiB, free 1918.3 MiB)
18:46:58.371 INFO MemoryStore - Block broadcast_594_piece0 stored as bytes in memory (estimated size 103.6 KiB, free 1918.2 MiB)
18:46:58.371 INFO BlockManagerInfo - Added broadcast_594_piece0 in memory on localhost:45727 (size: 103.6 KiB, free: 1919.7 MiB)
18:46:58.371 INFO SparkContext - Created broadcast 594 from broadcast at DAGScheduler.scala:1580
18:46:58.371 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 286 (MapPartitionsRDD[1411] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:58.371 INFO TaskSchedulerImpl - Adding task set 286.0 with 1 tasks resource profile 0
18:46:58.372 INFO TaskSetManager - Starting task 0.0 in stage 286.0 (TID 373) (localhost, executor driver, partition 0, ANY, 7797 bytes)
18:46:58.372 INFO Executor - Running task 0.0 in stage 286.0 (TID 373)
18:46:58.392 INFO NewHadoopRDD - Input split: hdfs://localhost:34389/user/runner/hdfs.cram:0+50619
18:46:58.393 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.393 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.cram dst=null perm=null proto=rpc
18:46:58.394 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.cram.crai dst=null perm=null proto=rpc
18:46:58.394 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.crai dst=null perm=null proto=rpc
18:46:58.395 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.395 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta dst=null perm=null proto=rpc
18:46:58.395 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.396 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.fasta.fai dst=null perm=null proto=rpc
18:46:58.397 WARN DFSUtil - Unexpected value for data transfer bytes=23 duration=0
18:46:58.398 WARN DFSUtil - Unexpected value for data transfer bytes=51015 duration=0
18:46:58.406 WARN DFSUtil - Unexpected value for data transfer bytes=364135 duration=0
18:46:58.407 WARN DFSUtil - Unexpected value for data transfer bytes=232039 duration=0
18:46:58.407 WARN DFSUtil - Unexpected value for data transfer bytes=99943 duration=0
18:46:58.411 INFO Executor - Finished task 0.0 in stage 286.0 (TID 373). 154101 bytes result sent to driver
18:46:58.412 INFO TaskSetManager - Finished task 0.0 in stage 286.0 (TID 373) in 40 ms on localhost (executor driver) (1/1)
18:46:58.412 INFO TaskSchedulerImpl - Removed TaskSet 286.0, whose tasks have all completed, from pool
18:46:58.412 INFO DAGScheduler - ResultStage 286 (collect at ReadsSparkSourceUnitTest.java:245) finished in 0.053 s
18:46:58.412 INFO DAGScheduler - Job 222 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:58.412 INFO TaskSchedulerImpl - Killing all running tasks in stage 286: Stage finished
18:46:58.413 INFO DAGScheduler - Job 222 finished: collect at ReadsSparkSourceUnitTest.java:245, took 0.054516 s
18:46:58.417 INFO MiniDFSCluster - Shutting down the Mini HDFS Cluster
18:46:58.417 INFO MiniDFSCluster - Shutting down DataNode 0
18:46:58.417 INFO DirectoryScanner - Shutdown has been called
18:46:58.418 INFO DataNode - Closing all peers.
18:46:58.418 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage10586223875571632427/data/data1, DS-86ba3c1b-c840-44a3-b780-d0a14742c5e3) exiting.
18:46:58.418 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage10586223875571632427/data/data2, DS-839b37db-fd6c-4e5c-9f3f-3701bfc12009) exiting.
18:46:58.421 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@27ae4ddd{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
18:46:58.422 INFO AbstractConnector - Stopped ServerConnector@521d7df{HTTP/1.1, (http/1.1)}{localhost:0}
18:46:58.422 INFO session - node0 Stopped scavenging
18:46:58.422 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@5748052d{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
18:46:58.423 INFO DataNode - Waiting up to 30 seconds for transfer threads to complete
18:46:58.423 INFO Server - Stopping server on 44793
18:46:58.424 INFO Server - Stopping IPC Server Responder
18:46:58.424 INFO Server - Stopping IPC Server listener on 0
18:46:58.425 ERROR DataNode - Command processor encountered interrupt and exit.
18:46:58.425 WARN DataNode - Ending command processor service for: Thread[Command processor,5,main]
18:46:58.425 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
18:46:58.425 WARN DataNode - Ending block pool service for: Block pool BP-2078786547-10.1.0.176-1747680417279 (Datanode Uuid 9bfc734e-24cf-44df-8af6-321e62d120ef) service to localhost/127.0.0.1:34389
18:46:58.425 INFO DataNode - Removed Block pool BP-2078786547-10.1.0.176-1747680417279 (Datanode Uuid 9bfc734e-24cf-44df-8af6-321e62d120ef)
18:46:58.425 INFO FsDatasetImpl - Removing block pool BP-2078786547-10.1.0.176-1747680417279
18:46:58.425 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
18:46:58.426 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
18:46:58.426 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
18:46:58.426 INFO FsDatasetAsyncDiskService - Shutting down all async disk service threads
18:46:58.426 INFO FsDatasetAsyncDiskService - All async disk service threads have been shut down
18:46:58.426 INFO RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
18:46:58.426 INFO RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
18:46:58.426 INFO DataNode - Shutdown complete.
18:46:58.426 INFO MiniDFSCluster - Shutting down the namenode
18:46:58.426 INFO FSNamesystem - Stopping services started for active state
18:46:58.426 INFO FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
18:46:58.426 INFO FSEditLog - Ending log segment 1, 18
18:46:58.427 INFO FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
18:46:58.427 INFO FSEditLog - Number of transactions: 19, total time for transactions (ms): 0, transactions batched in syncs: 4, number of syncs: 16, sync times (ms): 0 0
18:46:58.427 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage10586223875571632427/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage10586223875571632427/name-0-1/current/edits_0000000000000000001-0000000000000000019
18:46:58.427 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage10586223875571632427/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage10586223875571632427/name-0-2/current/edits_0000000000000000001-0000000000000000019
18:46:58.428 INFO FSEditLog - FSEditLogAsync was interrupted, exiting
18:46:58.428 INFO CacheReplicationMonitor - Shutting down CacheReplicationMonitor
18:46:58.428 INFO Server - Stopping server on 34389
18:46:58.428 INFO Server - Stopping IPC Server listener on 0
18:46:58.429 INFO Server - Stopping IPC Server Responder
18:46:58.429 INFO BlockManager - Stopping RedundancyMonitor.
18:46:58.430 INFO BlockManager - Stopping MarkedDeleteBlockScrubber.
18:46:58.435 INFO FSNamesystem - Stopping services started for active state
18:46:58.435 INFO FSNamesystem - Stopping services started for standby state
18:46:58.436 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@325e164a{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
18:46:58.437 INFO AbstractConnector - Stopped ServerConnector@330af3ff{HTTP/1.1, (http/1.1)}{localhost:0}
18:46:58.437 INFO session - node0 Stopped scavenging
18:46:58.437 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@42d0ab29{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
18:46:58.438 INFO MetricsSystemImpl - Stopping DataNode metrics system...
18:46:58.438 INFO MetricsSystemImpl - DataNode metrics system stopped.
18:46:58.438 INFO MetricsSystemImpl - DataNode metrics system shutdown complete.
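The teardown from "Shutting down the Mini HDFS Cluster" through "DataNode metrics system shutdown complete" is what a single MiniDFSCluster.shutdown() call produces: DataNode 0, the NameNode services and the metrics system are all stopped. A minimal sketch of the corresponding teardown step:

// Teardown sketch matching the shutdown sequence above.
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsTeardownSketch {
    static void stopMiniCluster(MiniDFSCluster cluster) {
        if (cluster != null) {
            cluster.shutdown();   // corresponds to "Shutting down the Mini HDFS Cluster"
        }
    }
}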
18:46:58.447 INFO MemoryStore - Block broadcast_595 stored as values in memory (estimated size 297.9 KiB, free 1917.9 MiB)
18:46:58.458 INFO MemoryStore - Block broadcast_595_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.9 MiB)
18:46:58.458 INFO BlockManagerInfo - Added broadcast_595_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.6 MiB)
18:46:58.458 INFO SparkContext - Created broadcast 595 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.465 INFO BlockManagerInfo - Removed broadcast_594_piece0 on localhost:45727 in memory (size: 103.6 KiB, free: 1919.7 MiB)
18:46:58.466 INFO BlockManagerInfo - Removed broadcast_588_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:58.466 INFO BlockManagerInfo - Removed broadcast_590_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:58.466 INFO BlockManagerInfo - Removed broadcast_592_piece0 on localhost:45727 in memory (size: 129.0 B, free: 1919.8 MiB)
18:46:58.467 INFO BlockManagerInfo - Removed broadcast_593_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:58.467 INFO BlockManagerInfo - Removed broadcast_589_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.8 MiB)
18:46:58.468 INFO BlockManagerInfo - Removed broadcast_591_piece0 on localhost:45727 in memory (size: 103.6 KiB, free: 1920.0 MiB)
18:46:58.468 INFO BlockManagerInfo - Removed broadcast_587_piece0 on localhost:45727 in memory (size: 129.0 B, free: 1920.0 MiB)
18:46:58.493 INFO MemoryStore - Block broadcast_596 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
18:46:58.499 INFO MemoryStore - Block broadcast_596_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
18:46:58.499 INFO BlockManagerInfo - Added broadcast_596_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.9 MiB)
18:46:58.499 INFO SparkContext - Created broadcast 596 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.530 INFO BlockManagerInfo - Removed broadcast_596_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1920.0 MiB)
18:46:58.531 INFO MemoryStore - Block broadcast_597 stored as values in memory (estimated size 298.0 KiB, free 1919.4 MiB)
18:46:58.537 INFO MemoryStore - Block broadcast_597_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
18:46:58.537 INFO BlockManagerInfo - Added broadcast_597_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.9 MiB)
18:46:58.538 INFO SparkContext - Created broadcast 597 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.558 INFO MemoryStore - Block broadcast_598 stored as values in memory (estimated size 536.0 B, free 1919.3 MiB)
18:46:58.559 INFO MemoryStore - Block broadcast_598_piece0 stored as bytes in memory (estimated size 187.0 B, free 1919.3 MiB)
18:46:58.559 INFO BlockManagerInfo - Added broadcast_598_piece0 in memory on localhost:45727 (size: 187.0 B, free: 1919.9 MiB)
18:46:58.559 INFO SparkContext - Created broadcast 598 from broadcast at CramSource.java:114
18:46:58.560 INFO MemoryStore - Block broadcast_599 stored as values in memory (estimated size 297.9 KiB, free 1919.0 MiB)
18:46:58.566 INFO MemoryStore - Block broadcast_599_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.0 MiB)
18:46:58.566 INFO BlockManagerInfo - Added broadcast_599_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.9 MiB)
18:46:58.567 INFO SparkContext - Created broadcast 599 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.581 INFO MemoryStore - Block broadcast_600 stored as values in memory (estimated size 608.0 B, free 1919.0 MiB)
18:46:58.582 INFO MemoryStore - Block broadcast_600_piece0 stored as bytes in memory (estimated size 206.0 B, free 1919.0 MiB)
18:46:58.582 INFO BlockManagerInfo - Added broadcast_600_piece0 in memory on localhost:45727 (size: 206.0 B, free: 1919.9 MiB)
18:46:58.582 INFO SparkContext - Created broadcast 600 from broadcast at CramSource.java:114
18:46:58.583 INFO MemoryStore - Block broadcast_601 stored as values in memory (estimated size 297.9 KiB, free 1918.7 MiB)
18:46:58.589 INFO MemoryStore - Block broadcast_601_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.6 MiB)
18:46:58.589 INFO BlockManagerInfo - Added broadcast_601_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.8 MiB)
18:46:58.590 INFO SparkContext - Created broadcast 601 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.605 INFO MemoryStore - Block broadcast_602 stored as values in memory (estimated size 298.0 KiB, free 1918.3 MiB)
18:46:58.611 INFO MemoryStore - Block broadcast_602_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.3 MiB)
18:46:58.611 INFO BlockManagerInfo - Added broadcast_602_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.8 MiB)
18:46:58.611 INFO SparkContext - Created broadcast 602 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.632 INFO FileInputFormat - Total input files to process : 1
18:46:58.670 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:163
18:46:58.670 INFO DAGScheduler - Got job 223 (collect at ReadsSparkSourceUnitTest.java:163) with 1 output partitions
18:46:58.670 INFO DAGScheduler - Final stage: ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163)
18:46:58.670 INFO DAGScheduler - Parents of final stage: List()
18:46:58.670 INFO DAGScheduler - Missing parents: List()
18:46:58.670 INFO DAGScheduler - Submitting ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:58.692 INFO MemoryStore - Block broadcast_603 stored as values in memory (estimated size 426.2 KiB, free 1917.9 MiB)
18:46:58.694 INFO MemoryStore - Block broadcast_603_piece0 stored as bytes in memory (estimated size 153.6 KiB, free 1917.7 MiB)
18:46:58.694 INFO BlockManagerInfo - Added broadcast_603_piece0 in memory on localhost:45727 (size: 153.6 KiB, free: 1919.6 MiB)
18:46:58.694 INFO SparkContext - Created broadcast 603 from broadcast at DAGScheduler.scala:1580
18:46:58.694 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 287 (MapPartitionsRDD[1441] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:58.694 INFO TaskSchedulerImpl - Adding task set 287.0 with 1 tasks resource profile 0
18:46:58.694 INFO TaskSetManager - Starting task 0.0 in stage 287.0 (TID 374) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7883 bytes)
18:46:58.695 INFO Executor - Running task 0.0 in stage 287.0 (TID 374)
18:46:58.725 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/tools/BQSR/HiSeq.1mb.1RG.2k_lines.alternate.bam:0+221635
18:46:58.734 INFO Executor - Finished task 0.0 in stage 287.0 (TID 374). 650141 bytes result sent to driver
18:46:58.736 INFO TaskSetManager - Finished task 0.0 in stage 287.0 (TID 374) in 42 ms on localhost (executor driver) (1/1)
18:46:58.736 INFO TaskSchedulerImpl - Removed TaskSet 287.0, whose tasks have all completed, from pool
18:46:58.736 INFO DAGScheduler - ResultStage 287 (collect at ReadsSparkSourceUnitTest.java:163) finished in 0.066 s
18:46:58.736 INFO DAGScheduler - Job 223 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:58.736 INFO TaskSchedulerImpl - Killing all running tasks in stage 287: Stage finished
18:46:58.736 INFO DAGScheduler - Job 223 finished: collect at ReadsSparkSourceUnitTest.java:163, took 0.066587 s
18:46:58.739 INFO MemoryStore - Block broadcast_604 stored as values in memory (estimated size 208.0 B, free 1917.7 MiB)
18:46:58.742 INFO MemoryStore - Block broadcast_604_piece0 stored as bytes in memory (estimated size 222.0 B, free 1917.7 MiB)
18:46:58.742 INFO BlockManagerInfo - Added broadcast_604_piece0 in memory on localhost:45727 (size: 222.0 B, free: 1919.6 MiB)
18:46:58.742 INFO SparkContext - Created broadcast 604 from broadcast at AbstractBinarySamSource.java:82
18:46:58.743 INFO MemoryStore - Block broadcast_605 stored as values in memory (estimated size 297.9 KiB, free 1917.4 MiB)
18:46:58.754 INFO MemoryStore - Block broadcast_605_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1917.4 MiB)
18:46:58.754 INFO BlockManagerInfo - Added broadcast_605_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.6 MiB)
18:46:58.754 INFO SparkContext - Created broadcast 605 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.778 INFO FileInputFormat - Total input files to process : 1
18:46:58.781 INFO SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:265
18:46:58.782 INFO DAGScheduler - Got job 224 (count at ReadsSparkSourceUnitTest.java:265) with 1 output partitions
18:46:58.782 INFO DAGScheduler - Final stage: ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265)
18:46:58.782 INFO DAGScheduler - Parents of final stage: List()
18:46:58.782 INFO DAGScheduler - Missing parents: List()
18:46:58.782 INFO DAGScheduler - Submitting ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:58.798 INFO MemoryStore - Block broadcast_606 stored as values in memory (estimated size 426.2 KiB, free 1917.0 MiB)
18:46:58.800 INFO MemoryStore - Block broadcast_606_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1916.8 MiB)
18:46:58.800 INFO BlockManagerInfo - Added broadcast_606_piece0 in memory on localhost:45727 (size: 153.7 KiB, free: 1919.4 MiB)
18:46:58.800 INFO SparkContext - Created broadcast 606 from broadcast at DAGScheduler.scala:1580
18:46:58.800 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 288 (MapPartitionsRDD[1448] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:58.800 INFO TaskSchedulerImpl - Adding task set 288.0 with 1 tasks resource profile 0
18:46:58.801 INFO TaskSetManager - Starting task 0.0 in stage 288.0 (TID 375) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7838 bytes)
18:46:58.801 INFO Executor - Running task 0.0 in stage 288.0 (TID 375)
18:46:58.832 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/NA12878.chr17_69k_70k.dictFix.bam:0+57538
18:46:58.836 INFO Executor - Finished task 0.0 in stage 288.0 (TID 375). 989 bytes result sent to driver
18:46:58.836 INFO TaskSetManager - Finished task 0.0 in stage 288.0 (TID 375) in 35 ms on localhost (executor driver) (1/1)
18:46:58.836 INFO TaskSchedulerImpl - Removed TaskSet 288.0, whose tasks have all completed, from pool
18:46:58.836 INFO DAGScheduler - ResultStage 288 (count at ReadsSparkSourceUnitTest.java:265) finished in 0.054 s
18:46:58.836 INFO DAGScheduler - Job 224 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:58.836 INFO TaskSchedulerImpl - Killing all running tasks in stage 288: Stage finished
18:46:58.837 INFO DAGScheduler - Job 224 finished: count at ReadsSparkSourceUnitTest.java:265, took 0.055172 s
18:46:58.838 INFO MemoryStore - Block broadcast_607 stored as values in memory (estimated size 160.0 B, free 1916.8 MiB)
18:46:58.842 INFO MemoryStore - Block broadcast_607_piece0 stored as bytes in memory (estimated size 212.0 B, free 1916.8 MiB)
18:46:58.842 INFO BlockManagerInfo - Added broadcast_607_piece0 in memory on localhost:45727 (size: 212.0 B, free: 1919.4 MiB)
18:46:58.842 INFO BlockManagerInfo - Removed broadcast_602_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.5 MiB)
18:46:58.842 INFO SparkContext - Created broadcast 607 from broadcast at AbstractBinarySamSource.java:82
18:46:58.843 INFO BlockManagerInfo - Removed broadcast_597_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.5 MiB)
18:46:58.843 INFO BlockManagerInfo - Removed broadcast_599_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.6 MiB)
18:46:58.843 INFO BlockManagerInfo - Removed broadcast_598_piece0 on localhost:45727 in memory (size: 187.0 B, free: 1919.6 MiB)
18:46:58.844 INFO BlockManagerInfo - Removed broadcast_601_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.6 MiB)
18:46:58.844 INFO MemoryStore - Block broadcast_608 stored as values in memory (estimated size 298.0 KiB, free 1917.9 MiB)
18:46:58.844 INFO BlockManagerInfo - Removed broadcast_600_piece0 on localhost:45727 in memory (size: 206.0 B, free: 1919.6 MiB)
18:46:58.844 INFO BlockManagerInfo - Removed broadcast_604_piece0 on localhost:45727 in memory (size: 222.0 B, free: 1919.6 MiB)
18:46:58.845 INFO BlockManagerInfo - Removed broadcast_603_piece0 on localhost:45727 in memory (size: 153.6 KiB, free: 1919.8 MiB)
18:46:58.845 INFO BlockManagerInfo - Removed broadcast_605_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:58.845 INFO BlockManagerInfo - Removed broadcast_606_piece0 on localhost:45727 in memory (size: 153.7 KiB, free: 1920.0 MiB)
18:46:58.850 INFO MemoryStore - Block broadcast_608_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1919.3 MiB)
18:46:58.850 INFO BlockManagerInfo - Added broadcast_608_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.9 MiB)
18:46:58.851 INFO SparkContext - Created broadcast 608 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.871 INFO FileInputFormat - Total input files to process : 1
18:46:58.874 INFO SparkContext - Starting job: count at ReadsSparkSourceUnitTest.java:286
18:46:58.875 INFO DAGScheduler - Got job 225 (count at ReadsSparkSourceUnitTest.java:286) with 1 output partitions
18:46:58.875 INFO DAGScheduler - Final stage: ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286)
18:46:58.875 INFO DAGScheduler - Parents of final stage: List()
18:46:58.875 INFO DAGScheduler - Missing parents: List()
18:46:58.875 INFO DAGScheduler - Submitting ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:58.891 INFO MemoryStore - Block broadcast_609 stored as values in memory (estimated size 426.2 KiB, free 1918.9 MiB)
18:46:58.893 INFO MemoryStore - Block broadcast_609_piece0 stored as bytes in memory (estimated size 153.7 KiB, free 1918.8 MiB)
18:46:58.893 INFO BlockManagerInfo - Added broadcast_609_piece0 in memory on localhost:45727 (size: 153.7 KiB, free: 1919.8 MiB)
18:46:58.893 INFO SparkContext - Created broadcast 609 from broadcast at DAGScheduler.scala:1580
18:46:58.893 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 289 (MapPartitionsRDD[1455] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:58.893 INFO TaskSchedulerImpl - Adding task set 289.0 with 1 tasks resource profile 0
18:46:58.893 INFO TaskSetManager - Starting task 0.0 in stage 289.0 (TID 376) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7901 bytes)
18:46:58.894 INFO Executor - Running task 0.0 in stage 289.0 (TID 376)
18:46:58.923 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/CEUTrio.HiSeq.WGS.b37.NA12878.snippet_with_unmapped.bam:0+2184
18:46:58.927 INFO Executor - Finished task 0.0 in stage 289.0 (TID 376). 989 bytes result sent to driver
18:46:58.927 INFO TaskSetManager - Finished task 0.0 in stage 289.0 (TID 376) in 34 ms on localhost (executor driver) (1/1)
18:46:58.928 INFO TaskSchedulerImpl - Removed TaskSet 289.0, whose tasks have all completed, from pool
18:46:58.928 INFO DAGScheduler - ResultStage 289 (count at ReadsSparkSourceUnitTest.java:286) finished in 0.053 s
18:46:58.928 INFO DAGScheduler - Job 225 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:58.928 INFO TaskSchedulerImpl - Killing all running tasks in stage 289: Stage finished
18:46:58.928 INFO DAGScheduler - Job 225 finished: count at ReadsSparkSourceUnitTest.java:286, took 0.053481 s
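Jobs 224 and 225 run count() over RDDs backed by the two local BAM files and return a single long to the driver (hence the small 989-byte task results). A hypothetical sketch of the count-then-assert pattern such a unit test typically uses; the helper and its expected value are illustrative, not taken from the test source:

// Hypothetical count-then-assert helper; not from the test source.
import org.apache.spark.api.java.JavaRDD;
import org.testng.Assert;

public class CountAssertSketch {
    static <T> void assertReadCount(JavaRDD<T> reads, long expectedReads) {
        // count() triggers a Spark job like "count at ReadsSparkSourceUnitTest.java:265"
        Assert.assertEquals(reads.count(), expectedReads);
    }
}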
18:46:58.931 INFO MemoryStore - Block broadcast_610 stored as values in memory (estimated size 298.0 KiB, free 1918.5 MiB)
18:46:58.941 INFO MemoryStore - Block broadcast_610_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.4 MiB)
18:46:58.941 INFO BlockManagerInfo - Added broadcast_610_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.7 MiB)
18:46:58.941 INFO SparkContext - Created broadcast 610 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.967 INFO MemoryStore - Block broadcast_611 stored as values in memory (estimated size 298.2 KiB, free 1918.1 MiB)
18:46:58.973 INFO MemoryStore - Block broadcast_611_piece0 stored as bytes in memory (estimated size 50.3 KiB, free 1918.1 MiB)
18:46:58.973 INFO BlockManagerInfo - Added broadcast_611_piece0 in memory on localhost:45727 (size: 50.3 KiB, free: 1919.7 MiB)
18:46:58.973 INFO SparkContext - Created broadcast 611 from newAPIHadoopFile at PathSplitSource.java:96
18:46:58.993 INFO FileInputFormat - Total input files to process : 1
18:46:58.993 INFO FileInputFormat - Total input files to process : 1
18:46:59.002 INFO MiniDFSCluster - starting cluster: numNameNodes=1, numDataNodes=1
18:46:59.003 INFO NameNode - Formatting using clusterid: testClusterID
18:46:59.003 INFO FSEditLog - Edit logging is async:true
18:46:59.010 INFO FSNamesystem - KeyProvider: null
18:46:59.010 INFO FSNamesystem - fsLock is fair: true
18:46:59.010 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
18:46:59.010 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
18:46:59.010 INFO FSNamesystem - supergroup = supergroup
18:46:59.010 INFO FSNamesystem - isPermissionEnabled = true
18:46:59.010 INFO FSNamesystem - isStoragePolicyEnabled = true
18:46:59.010 INFO FSNamesystem - HA Enabled: false
18:46:59.010 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:59.010 INFO DatanodeManager - dfs.block.invalidate.limit: configured=1000, counted=60, effective=1000
18:46:59.010 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
18:46:59.010 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18:46:59.011 INFO BlockManager - The block deletion will start around 2025 May 19 18:46:59
18:46:59.011 INFO GSet - Computing capacity for map BlocksMap
18:46:59.011 INFO GSet - VM type = 64-bit
18:46:59.011 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
18:46:59.011 INFO GSet - capacity = 2^23 = 8388608 entries
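The "Computing capacity for map BlocksMap" block above reports the arithmetic behind the map sizing: 2.0% of the 3.4 GB max heap is about 70 MB, which at one 64-bit reference (8 bytes) per entry, rounded down to a power of two, gives 2^23 = 8,388,608 entries. A worked sketch of that calculation, approximating Hadoop's LightWeightGSet capacity computation under the 8-byte-reference assumption:

// Worked sketch of the GSet sizing above: take the percentage of max heap,
// divide by the per-entry reference size, round down to a power of two.
public class GSetCapacitySketch {
    static long computeCapacity(long maxMemoryBytes, double percentage) {
        double budgetBytes = maxMemoryBytes * percentage / 100.0;   // 2.0% of ~3.4 GB ~= 70 MB
        long entries = (long) (budgetBytes / 8);                    // ~9.1M references fit in that budget
        int exponent = 63 - Long.numberOfLeadingZeros(entries);     // floor(log2(entries)) = 23
        return 1L << exponent;                                      // 2^23 = 8,388,608, as logged
    }

    public static void main(String[] args) {
        long maxMemory = (long) (3.4 * 1024 * 1024 * 1024);         // the 3.4 GB max heap from the log
        System.out.println(computeCapacity(maxMemory, 2.0));        // prints 8388608
    }
}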
18:46:59.014 INFO BlockManagerInfo - Removed broadcast_610_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.7 MiB)
18:46:59.014 INFO BlockManagerInfo - Removed broadcast_611_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.8 MiB)
18:46:59.015 INFO BlockManagerInfo - Removed broadcast_607_piece0 on localhost:45727 in memory (size: 212.0 B, free: 1919.8 MiB)
18:46:59.015 INFO BlockManagerInfo - Removed broadcast_608_piece0 on localhost:45727 in memory (size: 50.3 KiB, free: 1919.8 MiB)
18:46:59.015 INFO BlockManager - Storage policy satisfier is disabled
18:46:59.015 INFO BlockManager - dfs.block.access.token.enable = false
18:46:59.015 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
18:46:59.015 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
18:46:59.015 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
18:46:59.015 INFO BlockManagerInfo - Removed broadcast_609_piece0 on localhost:45727 in memory (size: 153.7 KiB, free: 1920.0 MiB)
18:46:59.015 INFO BlockManager - defaultReplication = 1
18:46:59.015 INFO BlockManager - maxReplication = 512
18:46:59.015 INFO BlockManager - minReplication = 1
18:46:59.015 INFO BlockManager - maxReplicationStreams = 2
18:46:59.015 INFO BlockManager - redundancyRecheckInterval = 3000ms
18:46:59.015 INFO BlockManager - encryptDataTransfer = false
18:46:59.015 INFO BlockManager - maxNumBlocksToLog = 1000
18:46:59.015 INFO GSet - Computing capacity for map INodeMap
18:46:59.015 INFO GSet - VM type = 64-bit
18:46:59.016 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
18:46:59.016 INFO GSet - capacity = 2^22 = 4194304 entries
18:46:59.016 INFO FSDirectory - ACLs enabled? true
18:46:59.016 INFO FSDirectory - POSIX ACL inheritance enabled? true
18:46:59.016 INFO FSDirectory - XAttrs enabled? true
18:46:59.016 INFO NameNode - Caching file names occurring more than 10 times
18:46:59.016 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
18:46:59.016 INFO SnapshotManager - SkipList is disabled
18:46:59.016 INFO GSet - Computing capacity for map cachedBlocks
18:46:59.016 INFO GSet - VM type = 64-bit
18:46:59.017 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
18:46:59.017 INFO GSet - capacity = 2^20 = 1048576 entries
18:46:59.017 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
18:46:59.017 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
18:46:59.017 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18:46:59.017 INFO FSNamesystem - Retry cache on namenode is enabled
18:46:59.017 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18:46:59.017 INFO GSet - Computing capacity for map NameNodeRetryCache
18:46:59.017 INFO GSet - VM type = 64-bit
18:46:59.017 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
18:46:59.017 INFO GSet - capacity = 2^17 = 131072 entries
18:46:59.017 INFO FSImage - Allocated new BlockPoolId: BP-826704803-10.1.0.176-1747680419017
18:46:59.019 INFO Storage - Storage directory /tmp/minicluster_storage7250618181682918930/name-0-1 has been successfully formatted.
18:46:59.021 INFO Storage - Storage directory /tmp/minicluster_storage7250618181682918930/name-0-2 has been successfully formatted.
18:46:59.030 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage7250618181682918930/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
18:46:59.030 INFO FSImageFormatProtobuf - Saving image file /tmp/minicluster_storage7250618181682918930/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
18:46:59.033 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage7250618181682918930/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
18:46:59.033 INFO FSImageFormatProtobuf - Image file /tmp/minicluster_storage7250618181682918930/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 419 bytes saved in 0 seconds .
18:46:59.034 INFO NNStorageRetentionManager - Going to retain 1 images with txid >= 0
18:46:59.038 INFO FSNamesystem - Stopping services started for active state
18:46:59.038 INFO FSNamesystem - Stopping services started for standby state
18:46:59.038 INFO NameNode - createNameNode []
18:46:59.039 WARN MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
18:46:59.040 INFO MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
18:46:59.040 INFO MetricsSystemImpl - NameNode metrics system started
18:46:59.040 INFO NameNodeUtils - fs.defaultFS is hdfs://127.0.0.1:0
18:46:59.050 INFO JvmPauseMonitor - Starting JVM pause monitor
18:46:59.051 INFO DFSUtil - Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
18:46:59.051 INFO DFSUtil - Starting Web-server for hdfs at: http://localhost:0
18:46:59.052 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
18:46:59.052 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
18:46:59.053 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
18:46:59.053 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
18:46:59.053 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
18:46:59.054 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
18:46:59.054 INFO HttpServer2 - Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
18:46:59.054 INFO HttpServer2 - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
18:46:59.054 INFO HttpServer2 - Jetty bound to port 34055
18:46:59.054 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
18:46:59.055 INFO session - DefaultSessionIdManager workerName=node0
18:46:59.055 INFO session - No SessionScavenger set, using defaults
18:46:59.055 INFO session - node0 Scavenging every 600000ms
18:46:59.056 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
18:46:59.056 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@7407565b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
18:46:59.143 INFO ContextHandler - Started o.e.j.w.WebAppContext@26c29fc5{hdfs,/,file:///tmp/jetty-localhost-34055-hadoop-hdfs-3_3_6-tests_jar-_-any-11522980780617049529/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
18:46:59.143 INFO AbstractConnector - Started ServerConnector@4411d442{HTTP/1.1, (http/1.1)}{localhost:34055}
18:46:59.143 INFO Server - Started @77644ms
18:46:59.143 INFO FSEditLog - Edit logging is async:true
18:46:59.152 INFO FSNamesystem - KeyProvider: null
18:46:59.152 INFO FSNamesystem - fsLock is fair: true
18:46:59.152 INFO FSNamesystem - Detailed lock hold time metrics enabled: false
18:46:59.152 INFO FSNamesystem - fsOwner = runner (auth:SIMPLE)
18:46:59.152 INFO FSNamesystem - supergroup = supergroup
18:46:59.152 INFO FSNamesystem - isPermissionEnabled = true
18:46:59.152 INFO FSNamesystem - isStoragePolicyEnabled = true
18:46:59.152 INFO FSNamesystem - HA Enabled: false
18:46:59.152 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:59.152 INFO DatanodeManager - dfs.block.invalidate.limit : configured=1000, counted=60, effected=1000
18:46:59.152 INFO DatanodeManager - dfs.namenode.datanode.registration.ip-hostname-check=true
18:46:59.152 INFO BlockManager - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18:46:59.152 INFO BlockManager - The block deletion will start around 2025 May 19 18:46:59
18:46:59.152 INFO GSet - Computing capacity for map BlocksMap
18:46:59.152 INFO GSet - VM type = 64-bit
18:46:59.152 INFO GSet - 2.0% max memory 3.4 GB = 70 MB
18:46:59.152 INFO GSet - capacity = 2^23 = 8388608 entries
18:46:59.157 INFO BlockManager - Storage policy satisfier is disabled
18:46:59.157 INFO BlockManager - dfs.block.access.token.enable = false
18:46:59.157 INFO BlockManagerSafeMode - dfs.namenode.safemode.threshold-pct = 0.999
18:46:59.157 INFO BlockManagerSafeMode - dfs.namenode.safemode.min.datanodes = 0
18:46:59.157 INFO BlockManagerSafeMode - dfs.namenode.safemode.extension = 0
18:46:59.157 INFO BlockManager - defaultReplication = 1
18:46:59.157 INFO BlockManager - maxReplication = 512
18:46:59.157 INFO BlockManager - minReplication = 1
18:46:59.157 INFO BlockManager - maxReplicationStreams = 2
18:46:59.157 INFO BlockManager - redundancyRecheckInterval = 3000ms
18:46:59.157 INFO BlockManager - encryptDataTransfer = false
18:46:59.157 INFO BlockManager - maxNumBlocksToLog = 1000
18:46:59.157 INFO GSet - Computing capacity for map INodeMap
18:46:59.157 INFO GSet - VM type = 64-bit
18:46:59.157 INFO GSet - 1.0% max memory 3.4 GB = 35 MB
18:46:59.157 INFO GSet - capacity = 2^22 = 4194304 entries
18:46:59.161 INFO FSDirectory - ACLs enabled? true
18:46:59.161 INFO FSDirectory - POSIX ACL inheritance enabled? true
18:46:59.161 INFO FSDirectory - XAttrs enabled? true
18:46:59.161 INFO NameNode - Caching file names occurring more than 10 times
18:46:59.161 INFO SnapshotManager - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
18:46:59.161 INFO SnapshotManager - SkipList is disabled
18:46:59.161 INFO GSet - Computing capacity for map cachedBlocks
18:46:59.161 INFO GSet - VM type = 64-bit
18:46:59.161 INFO GSet - 0.25% max memory 3.4 GB = 8.8 MB
18:46:59.161 INFO GSet - capacity = 2^20 = 1048576 entries
18:46:59.161 INFO TopMetrics - NNTop conf: dfs.namenode.top.window.num.buckets = 10
18:46:59.161 INFO TopMetrics - NNTop conf: dfs.namenode.top.num.users = 10
18:46:59.161 INFO TopMetrics - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18:46:59.161 INFO FSNamesystem - Retry cache on namenode is enabled
18:46:59.161 INFO FSNamesystem - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18:46:59.161 INFO GSet - Computing capacity for map NameNodeRetryCache
18:46:59.161 INFO GSet - VM type = 64-bit
18:46:59.161 INFO GSet - 0.029999999329447746% max memory 3.4 GB = 1.0 MB
18:46:59.161 INFO GSet - capacity = 2^17 = 131072 entries
18:46:59.163 INFO Storage - Lock on /tmp/minicluster_storage7250618181682918930/name-0-1/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:59.164 INFO Storage - Lock on /tmp/minicluster_storage7250618181682918930/name-0-2/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:59.164 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage7250618181682918930/name-0-1/current
18:46:59.165 INFO FileJournalManager - Recovering unfinalized segments in /tmp/minicluster_storage7250618181682918930/name-0-2/current
18:46:59.165 INFO FSImage - No edit log streams selected.
18:46:59.165 INFO FSImage - Planning to load image: FSImageFile(file=/tmp/minicluster_storage7250618181682918930/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
18:46:59.165 INFO FSImageFormatPBINode - Loading 1 INodes.
18:46:59.165 INFO FSImageFormatPBINode - Successfully loaded 1 inodes
18:46:59.165 INFO FSImageFormatPBINode - Completed update blocks map and name cache, total waiting duration 0ms.
18:46:59.166 INFO FSImageFormatProtobuf - Loaded FSImage in 0 seconds.
18:46:59.166 INFO FSImage - Loaded image for txid 0 from /tmp/minicluster_storage7250618181682918930/name-0-1/current/fsimage_0000000000000000000
18:46:59.166 INFO FSNamesystem - Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
18:46:59.166 INFO FSEditLog - Starting log segment at 1
18:46:59.169 INFO NameCache - initialized with 0 entries 0 lookups
18:46:59.169 INFO FSNamesystem - Finished loading FSImage in 8 msecs
18:46:59.169 INFO NameNode - RPC server is binding to localhost:0
18:46:59.169 INFO NameNode - Enable NameNode state context:false
18:46:59.170 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
18:46:59.170 INFO Server - Listener at localhost:34375
18:46:59.170 INFO Server - Starting Socket Reader #1 for port 0
18:46:59.171 INFO NameNode - Clients are to use localhost:34375 to access this namenode/service.
18:46:59.172 INFO FSNamesystem - Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
18:46:59.182 INFO LeaseManager - Number of blocks under construction: 0
18:46:59.182 INFO DatanodeAdminDefaultMonitor - Initialized the Default Decommission and Maintenance monitor
18:46:59.183 INFO BlockManager - Start MarkedDeleteBlockScrubber thread
18:46:59.183 INFO BlockManager - initializing replication queues
18:46:59.183 INFO StateChange - STATE* Leaving safe mode after 0 secs
18:46:59.183 INFO StateChange - STATE* Network topology has 0 racks and 0 datanodes
18:46:59.183 INFO StateChange - STATE* UnderReplicatedBlocks has 0 blocks
18:46:59.185 INFO Server - IPC Server Responder: starting
18:46:59.186 INFO Server - IPC Server listener on 0: starting
18:46:59.187 INFO NameNode - NameNode RPC up at: localhost/127.0.0.1:34375
18:46:59.188 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
18:46:59.188 INFO BlockManager - Total number of blocks = 0
18:46:59.188 INFO BlockManager - Number of invalid blocks = 0
18:46:59.188 INFO BlockManager - Number of under-replicated blocks = 0
18:46:59.188 INFO BlockManager - Number of over-replicated blocks = 0
18:46:59.188 INFO BlockManager - Number of blocks being written = 0
18:46:59.188 INFO StateChange - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 4 msec
18:46:59.188 INFO FSNamesystem - Starting services required for active state
18:46:59.188 INFO FSDirectory - Initializing quota with 12 thread(s)
18:46:59.188 INFO FSDirectory - Quota initialization completed in 1 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
18:46:59.189 INFO CacheReplicationMonitor - Starting CacheReplicationMonitor with interval 30000 milliseconds
18:46:59.190 INFO MiniDFSCluster - Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data1,[DISK]file:/tmp/minicluster_storage7250618181682918930/data/data2
18:46:59.190 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data1
18:46:59.191 INFO ThrottledAsyncChecker - Scheduling a check for [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data2
18:46:59.201 INFO MetricsSystemImpl - DataNode metrics system started (again)
18:46:59.201 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:59.201 INFO BlockScanner - Initialized block scanner with targetBytesPerSec 1048576
18:46:59.201 INFO DataNode - Configured hostname is 127.0.0.1
18:46:59.202 INFO Util - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
18:46:59.202 INFO DataNode - Starting DataNode with maxLockedMemory = 0
18:46:59.202 INFO DataNode - Opened streaming server at /127.0.0.1:37235
18:46:59.202 INFO DataNode - Balancing bandwidth is 104857600 bytes/s
18:46:59.202 INFO DataNode - Number threads for balancing is 100
18:46:59.205 WARN AuthenticationFilter - Unable to initialize FileSignerSecretProvider, falling back to use random secrets. Reason: Could not read signature secret file: /home/runner/hadoop-http-auth-signature-secret
18:46:59.205 WARN HttpRequestLog - Jetty request log can only be enabled using Log4j
18:46:59.206 INFO HttpServer2 - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
18:46:59.206 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
18:46:59.206 INFO HttpServer2 - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
18:46:59.207 INFO HttpServer2 - Jetty bound to port 38669
18:46:59.207 INFO Server - jetty-9.4.56.v20240826; built: 2024-08-26T17:15:05.868Z; git: ec6782ff5ead824dabdcf47fa98f90a4aedff401; jvm 17.0.6+10
18:46:59.208 INFO session - DefaultSessionIdManager workerName=node0
18:46:59.209 INFO session - No SessionScavenger set, using defaults
18:46:59.209 INFO session - node0 Scavenging every 600000ms
18:46:59.209 INFO ContextHandler - Started o.e.j.s.ServletContextHandler@38bf2698{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,AVAILABLE}
18:46:59.281 INFO BlockManagerInfo - Removed broadcast_595_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1920.0 MiB)
18:46:59.316 INFO ContextHandler - Started o.e.j.w.WebAppContext@592c6916{datanode,/,file:///tmp/jetty-localhost-38669-hadoop-hdfs-3_3_6-tests_jar-_-any-15407868698993424511/webapp/,AVAILABLE}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
18:46:59.317 INFO AbstractConnector - Started ServerConnector@18919f2c{HTTP/1.1, (http/1.1)}{localhost:38669}
18:46:59.317 INFO Server - Started @77818ms
18:46:59.318 WARN DatanodeHttpServer - Got null for restCsrfPreventionFilter - will not do any filtering.
18:46:59.319 INFO DatanodeHttpServer - Listening HTTP traffic on /127.0.0.1:41691
18:46:59.319 INFO DataNode - dnUserName = runner
18:46:59.319 INFO DataNode - supergroup = supergroup
18:46:59.319 INFO JvmPauseMonitor - Starting JVM pause monitor
18:46:59.319 INFO CallQueueManager - Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
18:46:59.320 INFO Server - Listener at localhost:39955
18:46:59.320 INFO Server - Starting Socket Reader #1 for port 0
18:46:59.322 INFO DataNode - Opened IPC server at /127.0.0.1:39955
18:46:59.336 INFO DataNode - Refresh request received for nameservices: null
18:46:59.336 INFO DataNode - Starting BPOfferServices for nameservices: <default>
18:46:59.336 WARN MetricsLoggerTask - Metrics logging will not be async since the logger is not log4j
18:46:59.337 INFO DataNode - Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:34375 starting to offer service
18:46:59.337 INFO Server - IPC Server Responder: starting
18:46:59.337 INFO Server - IPC Server listener on 0: starting
18:46:59.338 INFO DataNode - Acknowledging ACTIVE Namenode during handshake Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:34375
18:46:59.339 INFO Storage - Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
18:46:59.340 INFO Storage - Lock on /tmp/minicluster_storage7250618181682918930/data/data1/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:59.340 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data1 is not formatted for namespace 402021288. Formatting...
18:46:59.340 INFO Storage - Generated new storageID DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc for directory /tmp/minicluster_storage7250618181682918930/data/data1
18:46:59.341 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
18:46:59.342 INFO MiniDFSCluster - dnInfo.length != numDataNodes
18:46:59.342 INFO MiniDFSCluster - Waiting for cluster to become active
18:46:59.343 INFO Storage - Lock on /tmp/minicluster_storage7250618181682918930/data/data2/in_use.lock acquired by nodename 3117@pkrvmf6wy0o8zjz
18:46:59.343 INFO Storage - Storage directory with location [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data2 is not formatted for namespace 402021288. Formatting...
18:46:59.343 INFO Storage - Generated new storageID DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74 for directory /tmp/minicluster_storage7250618181682918930/data/data2
18:46:59.356 INFO Storage - Analyzing storage directories for bpid BP-826704803-10.1.0.176-1747680419017
18:46:59.356 INFO Storage - Locking is disabled for /tmp/minicluster_storage7250618181682918930/data/data1/current/BP-826704803-10.1.0.176-1747680419017
18:46:59.356 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data1 and block pool id BP-826704803-10.1.0.176-1747680419017 is not formatted. Formatting ...
18:46:59.356 INFO Storage - Formatting block pool BP-826704803-10.1.0.176-1747680419017 directory /tmp/minicluster_storage7250618181682918930/data/data1/current/BP-826704803-10.1.0.176-1747680419017/current
18:46:59.368 INFO Storage - Analyzing storage directories for bpid BP-826704803-10.1.0.176-1747680419017
18:46:59.368 INFO Storage - Locking is disabled for /tmp/minicluster_storage7250618181682918930/data/data2/current/BP-826704803-10.1.0.176-1747680419017
18:46:59.368 INFO Storage - Block pool storage directory for location [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data2 and block pool id BP-826704803-10.1.0.176-1747680419017 is not formatted. Formatting ...
18:46:59.368 INFO Storage - Formatting block pool BP-826704803-10.1.0.176-1747680419017 directory /tmp/minicluster_storage7250618181682918930/data/data2/current/BP-826704803-10.1.0.176-1747680419017/current
18:46:59.370 INFO DataNode - Setting up storage: nsid=402021288;bpid=BP-826704803-10.1.0.176-1747680419017;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=402021288;c=1747680419017;bpid=BP-826704803-10.1.0.176-1747680419017;dnuuid=null
18:46:59.372 INFO DataNode - Generated and persisted new Datanode UUID 0e759092-4c45-4a83-91c4-0eaec324bb71
18:46:59.372 INFO FsDatasetImpl - The datanode lock is a read write lock
18:46:59.373 INFO FsDatasetImpl - Added new volume: DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc
18:46:59.373 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data1, StorageType: DISK
18:46:59.373 INFO FsDatasetImpl - Added new volume: DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74
18:46:59.373 INFO FsDatasetImpl - Added volume - [DISK]file:/tmp/minicluster_storage7250618181682918930/data/data2, StorageType: DISK
18:46:59.373 INFO MemoryMappableBlockLoader - Initializing cache loader: MemoryMappableBlockLoader.
18:46:59.374 INFO FsDatasetImpl - Registered FSDatasetState MBean
18:46:59.374 INFO FsDatasetImpl - Adding block pool BP-826704803-10.1.0.176-1747680419017
18:46:59.374 INFO FsDatasetImpl - Scanning block pool BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data1...
18:46:59.374 INFO FsDatasetImpl - Scanning block pool BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data2...
18:46:59.374 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage7250618181682918930/data/data1/current/BP-826704803-10.1.0.176-1747680419017/current, will proceed with Du for space computation calculation,
18:46:59.374 WARN FsDatasetImpl - dfsUsed file missing in /tmp/minicluster_storage7250618181682918930/data/data2/current/BP-826704803-10.1.0.176-1747680419017/current, will proceed with Du for space computation calculation,
18:46:59.390 INFO FsDatasetImpl - Time taken to scan block pool BP-826704803-10.1.0.176-1747680419017 on /tmp/minicluster_storage7250618181682918930/data/data1: 16ms
18:46:59.390 INFO FsDatasetImpl - Time taken to scan block pool BP-826704803-10.1.0.176-1747680419017 on /tmp/minicluster_storage7250618181682918930/data/data2: 16ms
18:46:59.390 INFO FsDatasetImpl - Total time to scan all replicas for block pool BP-826704803-10.1.0.176-1747680419017: 17ms
18:46:59.390 INFO FsDatasetImpl - Adding replicas to map for block pool BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data1...
18:46:59.391 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage7250618181682918930/data/data1/current/BP-826704803-10.1.0.176-1747680419017/current/replicas doesn't exist
18:46:59.391 INFO FsDatasetImpl - Adding replicas to map for block pool BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data2...
18:46:59.391 INFO BlockPoolSlice - Replica Cache file: /tmp/minicluster_storage7250618181682918930/data/data2/current/BP-826704803-10.1.0.176-1747680419017/current/replicas doesn't exist
18:46:59.391 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data1: 0ms
18:46:59.391 INFO FsDatasetImpl - Time to add replicas to map for block pool BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data2: 0ms
18:46:59.391 INFO FsDatasetImpl - Total time to add all replicas to map for block pool BP-826704803-10.1.0.176-1747680419017: 0ms
18:46:59.391 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage7250618181682918930/data/data1
18:46:59.391 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage7250618181682918930/data/data1
18:46:59.391 INFO ThrottledAsyncChecker - Scheduling a check for /tmp/minicluster_storage7250618181682918930/data/data2
18:46:59.391 INFO DatasetVolumeChecker - Scheduled health check for volume /tmp/minicluster_storage7250618181682918930/data/data2
18:46:59.391 INFO VolumeScanner - Now scanning bpid BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data1
18:46:59.391 WARN DirectoryScanner - dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
18:46:59.391 INFO VolumeScanner - Now scanning bpid BP-826704803-10.1.0.176-1747680419017 on volume /tmp/minicluster_storage7250618181682918930/data/data2
18:46:59.392 INFO DirectoryScanner - Periodic Directory Tree Verification scan starting in 12278825ms with interval of 21600000ms and throttle limit of -1ms/s
18:46:59.392 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage7250618181682918930/data/data1, DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc): finished scanning block pool BP-826704803-10.1.0.176-1747680419017
18:46:59.392 INFO DataNode - Block pool BP-826704803-10.1.0.176-1747680419017 (Datanode Uuid 0e759092-4c45-4a83-91c4-0eaec324bb71) service to localhost/127.0.0.1:34375 beginning handshake with NN
18:46:59.392 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage7250618181682918930/data/data2, DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74): finished scanning block pool BP-826704803-10.1.0.176-1747680419017
18:46:59.392 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage7250618181682918930/data/data1, DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc): no suitable block pools found to scan. Waiting 1814399999 ms.
18:46:59.392 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage7250618181682918930/data/data2, DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74): no suitable block pools found to scan. Waiting 1814399999 ms.
18:46:59.392 INFO StateChange - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:37235, datanodeUuid=0e759092-4c45-4a83-91c4-0eaec324bb71, infoPort=41691, infoSecurePort=0, ipcPort=39955, storageInfo=lv=-57;cid=testClusterID;nsid=402021288;c=1747680419017) storage 0e759092-4c45-4a83-91c4-0eaec324bb71
18:46:59.392 INFO NetworkTopology - Adding a new node: /default-rack/127.0.0.1:37235
18:46:59.392 INFO BlockReportLeaseManager - Registered DN 0e759092-4c45-4a83-91c4-0eaec324bb71 (127.0.0.1:37235).
18:46:59.393 INFO DataNode - Block pool BP-826704803-10.1.0.176-1747680419017 (Datanode Uuid 0e759092-4c45-4a83-91c4-0eaec324bb71) service to localhost/127.0.0.1:34375 successfully registered with NN
18:46:59.393 INFO DataNode - For namenode localhost/127.0.0.1:34375 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
18:46:59.393 INFO DataNode - Starting IBR Task Handler.
18:46:59.393 INFO DatanodeDescriptor - Adding new storage ID DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc for DN 127.0.0.1:37235
18:46:59.393 INFO DatanodeDescriptor - Adding new storage ID DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74 for DN 127.0.0.1:37235
18:46:59.394 INFO DataNode - After receiving heartbeat response, updating state of namenode localhost:34375 to active
18:46:59.394 INFO BlockStateChange - BLOCK* processReport 0x8a260d2c2d120438 with lease ID 0xff928d5fd7e0dff8: Processing first storage report for DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74 from datanode DatanodeRegistration(127.0.0.1:37235, datanodeUuid=0e759092-4c45-4a83-91c4-0eaec324bb71, infoPort=41691, infoSecurePort=0, ipcPort=39955, storageInfo=lv=-57;cid=testClusterID;nsid=402021288;c=1747680419017)
18:46:59.394 INFO BlockStateChange - BLOCK* processReport 0x8a260d2c2d120438 with lease ID 0xff928d5fd7e0dff8: from storage DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74 node DatanodeRegistration(127.0.0.1:37235, datanodeUuid=0e759092-4c45-4a83-91c4-0eaec324bb71, infoPort=41691, infoSecurePort=0, ipcPort=39955, storageInfo=lv=-57;cid=testClusterID;nsid=402021288;c=1747680419017), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
18:46:59.394 INFO BlockStateChange - BLOCK* processReport 0x8a260d2c2d120438 with lease ID 0xff928d5fd7e0dff8: Processing first storage report for DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc from datanode DatanodeRegistration(127.0.0.1:37235, datanodeUuid=0e759092-4c45-4a83-91c4-0eaec324bb71, infoPort=41691, infoSecurePort=0, ipcPort=39955, storageInfo=lv=-57;cid=testClusterID;nsid=402021288;c=1747680419017)
18:46:59.394 INFO BlockStateChange - BLOCK* processReport 0x8a260d2c2d120438 with lease ID 0xff928d5fd7e0dff8: from storage DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc node DatanodeRegistration(127.0.0.1:37235, datanodeUuid=0e759092-4c45-4a83-91c4-0eaec324bb71, infoPort=41691, infoSecurePort=0, ipcPort=39955, storageInfo=lv=-57;cid=testClusterID;nsid=402021288;c=1747680419017), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
18:46:59.394 INFO DataNode - Successfully sent block report 0x8a260d2c2d120438 with lease ID 0xff928d5fd7e0dff8 to namenode: localhost/127.0.0.1:34375, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msecs to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
18:46:59.394 INFO DataNode - Got finalize command for block pool BP-826704803-10.1.0.176-1747680419017
18:46:59.442 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=datanodeReport src=null dst=null perm=null proto=rpc
18:46:59.442 INFO MiniDFSCluster - Cluster is active
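The block from the "starting cluster" line through "Cluster is active" above records a one-NameNode, one-DataNode MiniDFSCluster being formatted and brought up from the hadoop-hdfs tests jar. A hedged sketch of how such a cluster is typically started and torn down in a test (class, method, and directory names are illustrative, not taken from the test source):

    // Illustrative sketch: starting and stopping a one-DataNode MiniDFSCluster,
    // the component whose startup is logged above. Not the GATK test source.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    final class MiniClusterSketch {
        static void withMiniCluster() throws Exception {
            Configuration conf = new Configuration();
            // Keep cluster storage under a scratch directory (path is illustrative).
            conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "/tmp/minicluster_storage_example");

            MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                    .numDataNodes(1)
                    .build();
            try {
                cluster.waitActive();                       // "Cluster is active" above
                FileSystem fs = cluster.getFileSystem();    // hdfs://localhost:<random port>
                System.out.println("NameNode URI: " + fs.getUri());
                // ... test body goes here ...
            } finally {
                cluster.shutdown();                         // mirrors the shutdown logged later
            }
        }
    }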
18:46:59.444 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.445 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.bam dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
18:46:59.446 INFO StateChange - BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:37235 for /user/runner/hdfs.bam
18:46:59.447 INFO DataNode - Receiving BP-826704803-10.1.0.176-1747680419017:blk_1073741825_1001 src: /127.0.0.1:57926 dest: /127.0.0.1:37235
18:46:59.448 INFO clienttrace - src: /127.0.0.1:57926, dest: /127.0.0.1:37235, bytes: 2396, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1318397286_1, offset: 0, srvID: 0e759092-4c45-4a83-91c4-0eaec324bb71, blockid: BP-826704803-10.1.0.176-1747680419017:blk_1073741825_1001, duration(ns): 420052
18:46:59.449 INFO DataNode - PacketResponder: BP-826704803-10.1.0.176-1747680419017:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
18:46:59.449 INFO StateChange - DIR* completeFile: /user/runner/hdfs.bam is closed by DFSClient_NONMAPREDUCE_-1318397286_1
18:46:59.450 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.450 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=create src=/user/runner/hdfs.bai dst=null perm=runner:supergroup:rw-r--r-- proto=rpc
18:46:59.451 INFO StateChange - BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:37235 for /user/runner/hdfs.bai
18:46:59.452 INFO DataNode - Receiving BP-826704803-10.1.0.176-1747680419017:blk_1073741826_1002 src: /127.0.0.1:57932 dest: /127.0.0.1:37235
18:46:59.453 INFO clienttrace - src: /127.0.0.1:57932, dest: /127.0.0.1:37235, bytes: 656, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1318397286_1, offset: 0, srvID: 0e759092-4c45-4a83-91c4-0eaec324bb71, blockid: BP-826704803-10.1.0.176-1747680419017:blk_1073741826_1002, duration(ns): 364974
18:46:59.453 INFO DataNode - PacketResponder: BP-826704803-10.1.0.176-1747680419017:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
18:46:59.454 INFO StateChange - DIR* completeFile: /user/runner/hdfs.bai is closed by DFSClient_NONMAPREDUCE_-1318397286_1
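The audit, StateChange, and DataNode lines above show a small BAM (2396 bytes) and its index (656 bytes) being written into the cluster as /user/runner/hdfs.bam and /user/runner/hdfs.bai. A minimal sketch of that staging step using the plain FileSystem API, assuming fs is the MiniDFSCluster file system; the local source paths are placeholders:

    // Illustrative sketch: staging a local BAM and its index into the MiniDFSCluster,
    // producing create/HDFS_WRITE events like those logged above. Paths are placeholders.
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    final class StageBamSketch {
        static void stage(FileSystem fs) throws Exception {
            Path localBam = new Path("/path/to/local/hdfs_file_test.bam");  // placeholder
            Path localBai = new Path("/path/to/local/hdfs_file_test.bai");  // placeholder

            fs.copyFromLocalFile(localBam, new Path("/user/runner/hdfs.bam"));
            fs.copyFromLocalFile(localBai, new Path("/user/runner/hdfs.bai"));

            // Each copy allocates a block on the single DataNode (blk_... above)
            // and closes the file once the write pipeline acknowledges it.
            System.out.println("bam size = "
                    + fs.getFileStatus(new Path("/user/runner/hdfs.bam")).getLen());
        }
    }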
18:46:59.456 INFO MemoryStore - Block broadcast_612 stored as values in memory (estimated size 298.3 KiB, free 1919.7 MiB)
18:46:59.463 INFO MemoryStore - Block broadcast_612_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1919.7 MiB)
18:46:59.463 INFO BlockManagerInfo - Added broadcast_612_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1920.0 MiB)
18:46:59.463 INFO SparkContext - Created broadcast 612 from newAPIHadoopFile at PathSplitSource.java:96
18:46:59.483 INFO FileInputFormat - Total input files to process : 1
18:46:59.518 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:212
18:46:59.518 INFO DAGScheduler - Got job 226 (collect at ReadsSparkSourceUnitTest.java:212) with 1 output partitions
18:46:59.518 INFO DAGScheduler - Final stage: ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212)
18:46:59.518 INFO DAGScheduler - Parents of final stage: List()
18:46:59.518 INFO DAGScheduler - Missing parents: List()
18:46:59.518 INFO DAGScheduler - Submitting ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:59.535 INFO MemoryStore - Block broadcast_613 stored as values in memory (estimated size 426.4 KiB, free 1919.2 MiB)
18:46:59.537 INFO MemoryStore - Block broadcast_613_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1919.1 MiB)
18:46:59.537 INFO BlockManagerInfo - Added broadcast_613_piece0 in memory on localhost:45727 (size: 154.0 KiB, free: 1919.8 MiB)
18:46:59.537 INFO SparkContext - Created broadcast 613 from broadcast at DAGScheduler.scala:1580
18:46:59.537 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 290 (MapPartitionsRDD[1476] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:59.537 INFO TaskSchedulerImpl - Adding task set 290.0 with 1 tasks resource profile 0
18:46:59.537 INFO TaskSetManager - Starting task 0.0 in stage 290.0 (TID 377) (localhost, executor driver, partition 0, PROCESS_LOCAL, 7899 bytes)
18:46:59.538 INFO Executor - Running task 0.0 in stage 290.0 (TID 377)
18:46:59.567 INFO NewHadoopRDD - Input split: file:/home/runner/work/gatk/gatk/src/test/resources/org/broadinstitute/hellbender/engine/spark/datasources/ReadsSparkSource/hdfs_file_test.bam:0+2396
18:46:59.569 INFO Executor - Finished task 0.0 in stage 290.0 (TID 377). 5288 bytes result sent to driver
18:46:59.570 INFO TaskSetManager - Finished task 0.0 in stage 290.0 (TID 377) in 33 ms on localhost (executor driver) (1/1)
18:46:59.570 INFO TaskSchedulerImpl - Removed TaskSet 290.0, whose tasks have all completed, from pool
18:46:59.570 INFO DAGScheduler - ResultStage 290 (collect at ReadsSparkSourceUnitTest.java:212) finished in 0.051 s
18:46:59.570 INFO DAGScheduler - Job 226 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:59.570 INFO TaskSchedulerImpl - Killing all running tasks in stage 290: Stage finished
18:46:59.570 INFO DAGScheduler - Job 226 finished: collect at ReadsSparkSourceUnitTest.java:212, took 0.052121 s
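Job 226 collects reads that were loaded through newAPIHadoopFile (the "Created broadcast ... from newAPIHadoopFile at PathSplitSource.java:96" lines). The input format the GATK reads source actually uses is not visible in this log, so the sketch below only illustrates the newAPIHadoopFile mechanism itself, here with Hadoop's TextInputFormat; every name in it is an assumption:

    // Illustrative sketch of JavaSparkContext#newAPIHadoopFile, the call named in the
    // "Created broadcast ... from newAPIHadoopFile" lines above. TextInputFormat is used
    // purely for illustration; the GATK reads source uses a BAM-aware input format.
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    final class NewApiHadoopFileSketch {
        static List<String> readLines(JavaSparkContext ctx, String hdfsPath, Configuration hadoopConf) {
            JavaPairRDD<LongWritable, Text> rdd =
                    ctx.newAPIHadoopFile(hdfsPath, TextInputFormat.class,
                            LongWritable.class, Text.class, hadoopConf);
            // collect() drives a job like Job 226/227 above: one task per input split.
            return rdd.values().map(Text::toString).collect();
        }
    }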
18:46:59.571 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.571 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.571 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.572 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.572 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.bai dst=null perm=null proto=rpc
18:46:59.573 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.573 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.573 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.576 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
18:46:59.576 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
18:46:59.576 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.sbi dst=null perm=null proto=rpc
18:46:59.577 INFO MemoryStore - Block broadcast_614 stored as values in memory (estimated size 298.2 KiB, free 1918.8 MiB)
18:46:59.583 INFO MemoryStore - Block broadcast_614_piece0 stored as bytes in memory (estimated size 50.2 KiB, free 1918.8 MiB)
18:46:59.584 INFO BlockManagerInfo - Added broadcast_614_piece0 in memory on localhost:45727 (size: 50.2 KiB, free: 1919.8 MiB)
18:46:59.584 INFO SparkContext - Created broadcast 614 from newAPIHadoopFile at PathSplitSource.java:96
18:46:59.603 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.604 INFO FileInputFormat - Total input files to process : 1
18:46:59.604 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.639 INFO SparkContext - Starting job: collect at ReadsSparkSourceUnitTest.java:213
18:46:59.639 INFO DAGScheduler - Got job 227 (collect at ReadsSparkSourceUnitTest.java:213) with 1 output partitions
18:46:59.639 INFO DAGScheduler - Final stage: ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213)
18:46:59.639 INFO DAGScheduler - Parents of final stage: List()
18:46:59.639 INFO DAGScheduler - Missing parents: List()
18:46:59.639 INFO DAGScheduler - Submitting ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96), which has no missing parents
18:46:59.656 INFO MemoryStore - Block broadcast_615 stored as values in memory (estimated size 426.4 KiB, free 1918.3 MiB)
18:46:59.657 INFO MemoryStore - Block broadcast_615_piece0 stored as bytes in memory (estimated size 154.0 KiB, free 1918.2 MiB)
18:46:59.657 INFO BlockManagerInfo - Added broadcast_615_piece0 in memory on localhost:45727 (size: 154.0 KiB, free: 1919.6 MiB)
18:46:59.657 INFO SparkContext - Created broadcast 615 from broadcast at DAGScheduler.scala:1580
18:46:59.657 INFO DAGScheduler - Submitting 1 missing tasks from ResultStage 291 (MapPartitionsRDD[1483] at filter at ReadsSparkSource.java:96) (first 15 tasks are for partitions Vector(0))
18:46:59.657 INFO TaskSchedulerImpl - Adding task set 291.0 with 1 tasks resource profile 0
18:46:59.658 INFO TaskSetManager - Starting task 0.0 in stage 291.0 (TID 378) (localhost, executor driver, partition 0, ANY, 7796 bytes)
18:46:59.658 INFO Executor - Running task 0.0 in stage 291.0 (TID 378)
18:46:59.686 INFO NewHadoopRDD - Input split: hdfs://localhost:34375/user/runner/hdfs.bam:0+2396
18:46:59.687 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.688 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.688 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
18:46:59.689 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.689 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.689 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.bai dst=null perm=null proto=rpc
18:46:59.690 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.690 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.691 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.692 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
18:46:59.693 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
18:46:59.693 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
18:46:59.693 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.693 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.694 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
18:46:59.694 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.695 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bam dst=null perm=null proto=rpc
18:46:59.695 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bam.bai dst=null perm=null proto=rpc
18:46:59.695 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.696 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=getfileinfo src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.696 INFO audit - allowed=true ugi=runner (auth:SIMPLE) ip=/127.0.0.1 cmd=open src=/user/runner/hdfs.bai dst=null perm=null proto=rpc
18:46:59.702 INFO BlockManagerInfo - Removed broadcast_613_piece0 on localhost:45727 in memory (size: 154.0 KiB, free: 1919.8 MiB)
18:46:59.702 WARN DFSUtil - Unexpected value for data transfer bytes=2416 duration=0
18:46:59.703 INFO BlockManagerInfo - Removed broadcast_612_piece0 on localhost:45727 in memory (size: 50.2 KiB, free: 1919.8 MiB)
18:46:59.703 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
18:46:59.703 WARN DFSUtil - Unexpected value for data transfer bytes=664 duration=0
18:46:59.704 INFO Executor - Finished task 0.0 in stage 291.0 (TID 378). 5331 bytes result sent to driver
18:46:59.704 INFO TaskSetManager - Finished task 0.0 in stage 291.0 (TID 378) in 46 ms on localhost (executor driver) (1/1)
18:46:59.704 INFO TaskSchedulerImpl - Removed TaskSet 291.0, whose tasks have all completed, from pool
18:46:59.704 INFO DAGScheduler - ResultStage 291 (collect at ReadsSparkSourceUnitTest.java:213) finished in 0.064 s
18:46:59.704 INFO DAGScheduler - Job 227 is finished. Cancelling potential speculative or zombie tasks for this job
18:46:59.704 INFO TaskSchedulerImpl - Killing all running tasks in stage 291: Stage finished
18:46:59.704 INFO DAGScheduler - Job 227 finished: collect at ReadsSparkSourceUnitTest.java:213, took 0.065531 s
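Jobs 226 and 227 collect the same BAM once from the local test resource and once from hdfs://localhost:34375; the assertion that presumably compares the two results is not visible in the log. A hedged sketch of such a comparison, assuming a TestNG-style assertion and two already-collected lists:

    // Illustrative sketch: comparing reads collected from the local file (Job 226)
    // with reads collected from the mini HDFS cluster (Job 227). The element type
    // and the exact assertion are assumptions, not taken from the log.
    import java.util.List;
    import org.testng.Assert;

    final class CompareCollectionsSketch {
        static <T> void assertSameReads(List<T> fromLocalFile, List<T> fromHdfs) {
            Assert.assertEquals(fromHdfs.size(), fromLocalFile.size(), "read counts differ");
            Assert.assertEquals(fromHdfs, fromLocalFile, "reads differ between local and HDFS copies");
        }
    }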
18:46:59.705 INFO MiniDFSCluster - Shutting down the Mini HDFS Cluster
18:46:59.705 INFO MiniDFSCluster - Shutting down DataNode 0
18:46:59.705 INFO DirectoryScanner - Shutdown has been called
18:46:59.705 INFO DataNode - Closing all peers.
18:46:59.705 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage7250618181682918930/data/data1, DS-5407b9ec-75b9-4e05-9f65-375f5cbd35cc) exiting.
18:46:59.705 INFO VolumeScanner - VolumeScanner(/tmp/minicluster_storage7250618181682918930/data/data2, DS-b3060a69-54fd-40d7-96ee-f98ec9db0f74) exiting.
18:46:59.708 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@592c6916{datanode,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/datanode}
18:46:59.708 INFO AbstractConnector - Stopped ServerConnector@18919f2c{HTTP/1.1, (http/1.1)}{localhost:0}
18:46:59.708 INFO session - node0 Stopped scavenging
18:46:59.708 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@38bf2698{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
18:46:59.709 INFO DataNode - Waiting up to 30 seconds for transfer threads to complete
18:46:59.709 INFO Server - Stopping server on 39955
18:46:59.710 INFO Server - Stopping IPC Server listener on 0
18:46:59.710 INFO Server - Stopping IPC Server Responder
18:46:59.710 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
18:46:59.710 WARN DataNode - Ending block pool service for: Block pool BP-826704803-10.1.0.176-1747680419017 (Datanode Uuid 0e759092-4c45-4a83-91c4-0eaec324bb71) service to localhost/127.0.0.1:34375
18:46:59.710 INFO DataNode - Removed Block pool BP-826704803-10.1.0.176-1747680419017 (Datanode Uuid 0e759092-4c45-4a83-91c4-0eaec324bb71)
18:46:59.710 INFO FsDatasetImpl - Removing block pool BP-826704803-10.1.0.176-1747680419017
18:46:59.710 ERROR DataNode - Command processor encountered interrupt and exit.
18:46:59.710 WARN DataNode - Ending command processor service for: Thread[Command processor,5,main]
18:46:59.710 WARN IncrementalBlockReportManager - IncrementalBlockReportManager interrupted
18:46:59.710 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
18:46:59.711 WARN CachingGetSpaceUsed - Thread Interrupted waiting to refresh disk information: sleep interrupted
18:46:59.711 INFO FsDatasetAsyncDiskService - Shutting down all async disk service threads
18:46:59.711 INFO FsDatasetAsyncDiskService - All async disk service threads have been shut down
18:46:59.711 INFO RamDiskAsyncLazyPersistService - Shutting down all async lazy persist service threads
18:46:59.711 INFO RamDiskAsyncLazyPersistService - All async lazy persist service threads have been shut down
18:46:59.711 INFO DataNode - Shutdown complete.
18:46:59.711 INFO MiniDFSCluster - Shutting down the namenode
18:46:59.711 INFO FSNamesystem - Stopping services started for active state
18:46:59.711 INFO FSNamesystem - LazyPersistFileScrubber was interrupted, exiting
18:46:59.711 INFO FSNamesystem - NameNodeEditLogRoller was interrupted, exiting
18:46:59.711 INFO FSEditLog - Ending log segment 1, 13
18:46:59.712 INFO FSEditLog - Number of transactions: 14 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 2 Number of syncs: 13 SyncTimes(ms): 0 1
18:46:59.712 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage7250618181682918930/name-0-1/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage7250618181682918930/name-0-1/current/edits_0000000000000000001-0000000000000000014
18:46:59.712 INFO FileJournalManager - Finalizing edits file /tmp/minicluster_storage7250618181682918930/name-0-2/current/edits_inprogress_0000000000000000001 -> /tmp/minicluster_storage7250618181682918930/name-0-2/current/edits_0000000000000000001-0000000000000000014
18:46:59.712 INFO FSEditLog - FSEditLogAsync was interrupted, exiting
18:46:59.713 INFO CacheReplicationMonitor - Shutting down CacheReplicationMonitor
18:46:59.713 INFO Server - Stopping server on 34375
18:46:59.713 INFO Server - Stopping IPC Server Responder
18:46:59.713 INFO Server - Stopping IPC Server listener on 0
18:46:59.714 INFO BlockManager - Stopping MarkedDeleteBlockScrubber.
18:46:59.714 INFO BlockManager - Stopping RedundancyMonitor.
18:46:59.717 INFO FSNamesystem - Stopping services started for active state
18:46:59.717 INFO FSNamesystem - Stopping services started for standby state
18:46:59.718 INFO ContextHandler - Stopped o.e.j.w.WebAppContext@26c29fc5{hdfs,/,null,STOPPED}{jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/hdfs}
18:46:59.718 INFO AbstractConnector - Stopped ServerConnector@4411d442{HTTP/1.1, (http/1.1)}{localhost:0}
18:46:59.718 INFO session - node0 Stopped scavenging
18:46:59.719 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler@7407565b{static,/static,jar:file:/home/runner/.gradle/caches/modules-2/files-2.1/org.apache.hadoop/hadoop-hdfs/3.3.6/5058b645375c6a68f509e167ad6a6ada9642df09/hadoop-hdfs-3.3.6-tests.jar!/webapps/static,STOPPED}
18:46:59.719 INFO MetricsSystemImpl - Stopping DataNode metrics system...
18:46:59.719 INFO MetricsSystemImpl - DataNode metrics system stopped.
18:46:59.719 INFO MetricsSystemImpl - DataNode metrics system shutdown complete.