Continuation Is a Sub-Task of Fill-in-the-Blank: Why Not Train for Both?
Fill-in-the-blank, in which a model generates text to replace a missing span within a passage, is useful for a variety of applications where writers interact with a natural language generation (NLG) system to craft text.
However, far more research attention has been devoted to training models for continuation, the task of appending text to the end of a passage.
Since continuation is in fact a sub-task of fill-in-the-blank in which the blank is placed at the end of the sequence, we propose training a single model that can handle both tasks effectively.
The result is improved efficiency, since only one model needs to be maintained, with no negative impact on performance at either task.
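To make the reduction concrete, the minimal sketch below shows one common way to serialize fill-in-the-blank training examples with sentinel tokens; the token names `<blank>` and `<sep>`, the helper `make_example`, and this exact format are illustrative assumptions rather than the paper's specification. The point it demonstrates is that when the suffix is empty, the blank sits at the sequence's end and the example reduces to ordinary continuation.

```python
# Hypothetical serialization of fill-in-the-blank training examples.
# The sentinel tokens (<blank>, <sep>) and this exact layout are
# illustrative assumptions, not necessarily the format used in the paper.

def make_example(prefix: str, target: str, suffix: str) -> str:
    """Serialize one training example: the model is shown the passage with
    a blank marker and learns to generate the missing span after <sep>."""
    return f"{prefix}<blank>{suffix}<sep>{target}"

# A general fill-in-the-blank example: the blank is mid-passage.
infill = make_example(
    prefix="She poured the coffee and ",
    target="sat down by the window",
    suffix=" to read the paper.",
)

# Continuation as the special case where the suffix is empty, i.e. the
# blank is placed at the end of the sequence.
continuation = make_example(
    prefix="She poured the coffee and ",
    target="sat down by the window to read the paper.",
    suffix="",
)

print(infill)
# She poured the coffee and <blank> to read the paper.<sep>sat down by the window
print(continuation)
# She poured the coffee and <blank><sep>sat down by the window to read the paper.
```

Under this framing, an existing continuation corpus can be folded into joint training simply by serializing each example with an empty suffix, so a single model sees both task variants during training.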