Corpus ID: 245218671

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

@article{Lu2021NeuroLogicAD,
  title={NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics},
  author={Ximing Lu and Sean Welleck and Peter West and Liwei Jiang and Jungo Kasai and Daniel Khashabi and Ronan Le Bras and Lianhui Qin and Youngjae Yu and Rowan Zellers and Noah A. Smith and Yejin Choi},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.08726}
}
The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however, requires foresight to plan ahead feasible future paths. Drawing inspiration from the A* search algorithm, we propose NeuroLogic A*esque, a decoding algorithm that incorporates heuristic estimates of future cost. We develop lookahead heuristics that are efficient for large-scale language…
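As an illustration of the idea (a toy sketch, not the authors' implementation), the Python snippet below scores each beam candidate by its accumulated log-probability plus a greedy-lookahead estimate of how likely an unmet lexical constraint is to be satisfied; the bigram "model", vocabulary, keyword, and all function names are invented stand-ins.

```python
import math

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]
BIGRAMS = {  # toy next-token preferences; unlisted pairs get a small floor
    ("<bos>", "the"): 0.6, ("the", "cat"): 0.5, ("the", "mat"): 0.3,
    ("cat", "sat"): 0.7, ("sat", "on"): 0.8, ("on", "the"): 0.9,
    ("mat", "<eos>"): 0.9,
}

def next_log_probs(prefix):
    """Toy autoregressive model; replace with a real LM's next-token scores."""
    prev = prefix[-1]
    raw = {tok: BIGRAMS.get((prev, tok), 0.02) for tok in VOCAB}
    z = sum(raw.values())
    return {tok: math.log(p / z) for tok, p in raw.items()}

def lookahead_heuristic(prefix, keyword, depth=3):
    """Greedily roll out `depth` tokens and return the best log-probability
    the constraint keyword receives in the window (0 if already satisfied)."""
    if keyword in prefix:
        return 0.0
    rollout, best = list(prefix), float("-inf")
    for _ in range(depth):
        dist = next_log_probs(rollout)
        best = max(best, dist[keyword])
        rollout.append(max(dist, key=dist.get))  # greedy continuation
    return best

def decode(keyword="mat", beam_size=2, max_len=8, alpha=1.0):
    """Beam search ranking candidates by f = g + alpha * h: accumulated
    log-probability plus the lookahead estimate of future constraint cost."""
    beams = [(["<bos>"], 0.0)]
    for _ in range(max_len):
        scored = []
        for tokens, logp in beams:
            if tokens[-1] == "<eos>":            # keep finished hypotheses
                scored.append((logp, tokens, logp))
                continue
            for tok, tok_lp in next_log_probs(tokens).items():
                new_tokens, new_logp = tokens + [tok], logp + tok_lp
                f = new_logp + alpha * lookahead_heuristic(new_tokens, keyword)
                scored.append((f, new_tokens, new_logp))
        scored.sort(key=lambda c: c[0], reverse=True)
        beams = [(toks, lp) for _, toks, lp in scored[:beam_size]]
        if all(toks[-1] == "<eos>" for toks, _ in beams):
            break
    return beams[0][0]

print(decode())  # e.g. ['<bos>', 'the', 'mat', '<eos>'] under this toy model
```

Under this toy model the lookahead bonus steers the beam toward prefixes from which the keyword is still reachable, whereas plain beam search would rank candidates by log-probability alone.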
Sampling with Attribute-Related Information for Controlling Language Models
TLDR
Incorporates intentions into the input text (prompt) or the decoding process (guided decoding) for controllable generation.
Penguins Don't Fly: Reasoning about Generics through Instantiations and Exceptions
Generics express generalizations about the world (e.g., "birds can fly"). However, they are not universally true – while sparrows and penguins are both birds, only sparrows can fly and penguins cannot.
Internet-augmented language models through few-shot prompting for open-domain question answering
TLDR
This work uses few-shot prompting to learn to condition language models on information returned from the web using Google Search, a broad and constantly updated knowledge source, and finds that language models conditioned on the web surpass performance of closed-book models of similar, or even larger, model sizes in open-domain question answering.
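The core mechanism, conditioning a few-shot prompt on retrieved evidence, amounts to string construction; in the hedged sketch below the hard-coded snippet stands in for Google Search results, all names and examples are invented, and the resulting prompt would be fed to any large language model.

```python
# Retrieved snippets are hard-coded stand-ins for web search results; the
# worked example and all names are invented for illustration.
FEW_SHOT_EXAMPLES = [
    ("Evidence: The Eiffel Tower is 330 metres tall.",
     "Question: How tall is the Eiffel Tower?",
     "Answer: 330 metres"),
]

def build_prompt(evidence_snippets, question):
    parts = []
    for ev, q, a in FEW_SHOT_EXAMPLES:              # worked examples first
        parts.append(f"{ev}\n{q}\n{a}\n")
    parts.append("Evidence: " + " ".join(evidence_snippets))  # retrieved text
    parts.append(f"Question: {question}")
    parts.append("Answer:")
    return "\n".join(parts)

retrieved = ["Mount Everest is 8,849 metres above sea level."]
print(build_prompt(retrieved, "How tall is Mount Everest?"))
# The resulting string would be passed to the language model for completion.
```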

References

Showing 1–10 of 61 references
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
TLDR
Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, it can be used to achieve significant gains in performance in domain adaptation scenarios.
Neural Text Generation with Unlikelihood Training
TLDR
It is shown that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution, thus providing a strong alternative to existing techniques.
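For context, a minimal sketch of the token-level unlikelihood term that paper introduces: previously generated tokens are treated as negative candidates whose probability is pushed down alongside the usual negative log-likelihood of the target. Plain-Python probabilities stand in for model outputs, and the function name is illustrative only.

```python
import math

def token_losses(probs, target, negative_candidates):
    """probs: token -> model probability at this decoding step."""
    nll = -math.log(probs[target])                    # standard MLE term
    unlikelihood = -sum(math.log(1.0 - probs[c])      # penalise repeated tokens
                        for c in negative_candidates if c != target)
    return nll, unlikelihood

# Toy distribution over three tokens; "the" was generated earlier, so it is a
# negative candidate whose probability the unlikelihood term pushes down.
probs = {"the": 0.5, "cat": 0.3, "sat": 0.2}
nll, ul = token_losses(probs, target="sat", negative_candidates=["the"])
print(round(nll, 3), round(ul, 3))  # training would minimise nll + alpha * ul
```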
Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation
TLDR
This work presents an algorithm for lexically constrained decoding with a complexity of O(1) in the number of constraints, demonstrates the algorithm’s remarkable ability to properly place constraints, and uses it to explore the shaky relationship between model and BLEU scores.
CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling
TLDR
This paper proposes CGMH, a novel approach using Metropolis-Hastings sampling for constrained sentence generation that allows complicated constraints such as the occurrence of multiple keywords in the target sentences, which cannot be handled in traditional RNN-based approaches.
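A rough sketch of the Metropolis-Hastings loop behind this approach, with a toy fluency scorer in place of a pre-trained language model and the proposal-distribution correction omitted for brevity; every name and constant here is hypothetical.

```python
import math
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug"]
KEYWORDS = {"cat", "mat"}           # hard lexical constraints, never edited away

def score(sentence):
    """Toy fluency score; CGMH uses a pre-trained language model here."""
    good = {("the", "cat"), ("cat", "sat"), ("sat", "on"),
            ("on", "the"), ("the", "mat")}
    bonus = sum(1.0 for pair in zip(sentence, sentence[1:]) if pair in good)
    return math.exp(bonus - 0.2 * len(sentence))

def propose(sentence):
    """Pick a position and apply a random local edit, protecting keywords."""
    i = random.randrange(len(sentence) + 1)
    move = random.choice(["replace", "insert", "delete"])
    new = list(sentence)
    if move == "insert" or i == len(sentence):
        new.insert(min(i, len(sentence)), random.choice(VOCAB))
    elif move == "replace" and sentence[i] not in KEYWORDS:
        new[i] = random.choice(VOCAB)
    elif move == "delete" and sentence[i] not in KEYWORDS and len(sentence) > 2:
        del new[i]
    return new

def sample(steps=2000, seed=0):
    random.seed(seed)
    current = ["cat", "mat"]        # initialise from the constraint keywords
    for _ in range(steps):
        candidate = propose(current)
        accept = min(1.0, score(candidate) / score(current))  # acceptance ratio
        if random.random() < accept:
            current = candidate
    return current

print(" ".join(sample()))
```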
Language Generation via Combinatorial Constraint Satisfaction: A Tree Search Enhanced Monte-Carlo Approach
TLDR
This work proposes TSMC, an efficient method to generate high-likelihood sentences with respect to a pre-trained language model while satisfying the constraints, which is highly flexible, requires no task-specific training, and leverages efficient constraint satisfaction solving techniques.
Non-Monotonic Sequential Text Generation
TLDR
This work proposes a framework for training models of text generation that operate in non-monotonic orders, and demonstrates that using the proposed method, it is possible to learn policies which generate text without pre-specifying a generation order while achieving competitive performance with conventional left-to-right generation.
Global Neural CCG Parsing with Optimality Guarantees
TLDR
This work introduces the first global recursive neural parsing model with optimality guarantees during decoding, and shows it is possible to learn an efficient A* parser.
A* Parsing: Fast Exact Viterbi Parse Selection
TLDR
This work presents an extension of the classic A* search procedure to tabular PCFG parsing, which is simpler to implement than an upward-propagating best-first parser, is correct for a wide range of parser control strategies and maintains worst-case cubic time.
Comparison of Diverse Decoding Methods from Conditional Language Models
TLDR
This work performs an extensive survey of decoding-time strategies for generating diverse outputs from a conditional language model, and presents a novel method that first over-samples candidates, then uses clustering to remove similar sequences, achieving high diversity without sacrificing quality.
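The over-sample-then-cluster recipe can be sketched in a few lines; Jaccard similarity over token sets below is an invented stand-in for whatever similarity measure and clustering the paper actually uses.

```python
def jaccard(a, b):
    """Token-set overlap between two candidate strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(1, len(sa | sb))

def diverse_subset(candidates, threshold=0.7):
    """Greedy clustering: keep a candidate only if it is not too similar to any
    representative already kept."""
    kept = []
    for cand in candidates:
        if all(jaccard(cand, rep) < threshold for rep in kept):
            kept.append(cand)
    return kept

oversampled = [                      # imagine these were sampled from the model
    "a cat sat on the mat",
    "the cat sat on a mat",          # near-duplicate of the first
    "a dog ran across the yard",
    "the dog ran across a yard",     # near-duplicate of the third
    "birds fly over the river",
]
print(diverse_subset(oversampled))   # one representative per cluster
```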
Training Neural Machine Translation to Apply Terminology Constraints
TLDR
Comparative experiments show that the proposed method is not only more effective than a state-of-the-art implementation of constrained decoding, but is also as fast as constraint-free decoding.