InFillmore: Frame-Guided Language Generation with Bidirectional Context

Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, Benjamin Van Durme
We propose a structured extension to bidirectional-context conditional language generation, or “infilling,” inspired by Frame Semantic theory. Guidance is provided through one of two approaches: (1) model fine-tuning, conditioning directly on observed symbolic frames, and (2) a novel extension to disjunctive lexically constrained decoding that leverages frame semantic lexical units. Automatic and human evaluations confirm that frame-guided generation allows for explicit manipulation of intended…
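The disjunctive constraint idea can be illustrated with a minimal sketch: each frame contributes a set of lexical units, and an infilled span satisfies the constraints if it realizes at least one lexical unit per frame. The function name and the example frame-to-lexical-unit mapping below are hypothetical, for illustration only.

```python
def satisfies_disjunctive_constraints(text, frame_lexical_units):
    """Return True if `text` contains at least one lexical unit for every frame."""
    tokens = set(text.lower().split())
    return all(
        any(lu.lower() in tokens for lu in units)
        for units in frame_lexical_units.values()
    )

# Hypothetical frames with a few of their lexical-unit triggers.
frames = {
    "Commerce_buy": {"buy", "bought", "purchase", "purchased"},
    "Motion": {"go", "went", "move", "moved"},
}

print(satisfies_disjunctive_constraints("She went out and bought bread", frames))
```

In an actual decoder, such a check would be folded into beam search as a disjunctive positive constraint rather than applied as a post-hoc filter.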


Enabling Language Models to Fill in the Blanks
Humans have difficulty identifying sentences infilled by the proposed approach, which enables LMs to infill entire sentences effectively in three different domains: short stories, scientific abstracts, and lyrics.
Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation
A neural narrative generation system interacts with humans to generate stories; humans tasked with collaboratively improving a particular characteristic of a story are in fact able to do so, which has implications for future uses of human-in-the-loop systems.
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
A new framework for evaluating story understanding and script learning: the 'Story Cloze Test', which requires a system to choose the correct ending to a four-sentence story, together with a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation.
Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting
Lexically constrained sequence decoding allows explicit positive or negative phrase-based constraints to be placed on target output strings in generation tasks such as machine translation or monolingual rewriting.
Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation
This work presents an algorithm for lexically constrained decoding with O(1) complexity in the number of constraints, demonstrates the algorithm's remarkable ability to properly place constraints, and uses it to explore the shaky relationship between model and BLEU scores.
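The core of dynamic beam allocation can be sketched as follows: beam slots are divided among "banks" indexed by how many constraints a hypothesis has satisfied, so that partially constrained hypotheses are not crowded out by unconstrained, higher-scoring ones. This is a simplified illustration under assumed data structures (score, constraints-met pairs), not the paper's implementation.

```python
def allocate_beam(hypotheses, beam_size, num_constraints):
    """Select a beam from (score, n_constraints_met) pairs using per-bank quotas."""
    banks = {b: [] for b in range(num_constraints + 1)}
    for hyp in hypotheses:
        banks[hyp[1]].append(hyp)

    # Give each bank an equal share of the beam, visiting the banks with
    # the most satisfied constraints first.
    per_bank = beam_size // (num_constraints + 1)
    beam = []
    for b in sorted(banks, reverse=True):
        ranked = sorted(banks[b], reverse=True)  # best score first
        beam.extend(ranked[:per_bank])

    # Fill any remaining slots with the globally best leftover hypotheses.
    leftovers = sorted((h for h in hypotheses if h not in beam), reverse=True)
    beam.extend(leftovers[:beam_size - len(beam)])
    return beam
```

With a beam of 4 and 2 constraints, a hypothesis that has met both constraints survives even when its score is the worst in the pool, which is the behavior that makes constraint placement reliable.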
COD3S: Diverse Generation with Discrete Semantic Signatures
This work presents COD3S, a novel method for generating semantically diverse sentences using neural sequence-to-sequence (seq2seq) models, and applies it to causal generation, the task of predicting a proposition's plausible causes or effects.
Iterative Paraphrastic Augmentation with Discriminative Span Alignment
A novel paraphrastic augmentation strategy based on sentence-level lexically constrained paraphrasing and discriminative span alignment that allows for the large-scale expansion of existing datasets or the rapid creation of new datasets using a small, manually produced seed corpus. Expand
LOME: Large Ontology Multilingual Extraction
This work presents LOME, a system for performing multilingual information extraction, achieved through the use of multilingual encoders like XLM-R (Conneau et al., 2020) and by leveraging multilingual training data.
Neural Machine Translation With Explicit Phrase Alignment
The key idea is to build, for neural machine translation, a search space similar to that of phrase-based statistical machine translation in which phrase alignment is readily available; a new decoding algorithm is designed that can easily impose lexical and structural constraints.