A Plug-and-Play Method for Controlled Text Generation

@article{Pascual2021APM,
  title={A Plug-and-Play Method for Controlled Text Generation},
  author={Damian Pascual and B{\'e}ni Egressy and Clara Meister and Ryan Cotterell and Roger Wattenhofer},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.09707}
}
Large pre-trained language models have repeatedly shown their ability to produce fluent text. Yet even when starting from a prompt, generation can continue in many plausible directions. Current decoding methods with the goal of controlling generation, e.g., to ensure specific words are included, either require additional models or fine-tuning, or work poorly when the task at hand is semantically unconstrained, e.g., story generation. In this work, we present a plug-and-play decoding method for… 
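
The abstract is cut off before the method details, but the general plug-and-play idea it describes, steering a frozen language model so that chosen guide words appear, can be illustrated with a minimal sketch. The snippet below is not the authors' exact algorithm; the helper name boost_keyword_logits, the fixed boost value, and the toy vocabulary are illustrative assumptions. It simply adds a bonus to the logits of keyword tokens before sampling, one simple way to bias generation toward specific words without any additional model or fine-tuning.

    # Minimal sketch (not the paper's exact method): bias a frozen LM's
    # next-token distribution toward user-chosen keywords at decoding time.
    import torch

    def boost_keyword_logits(logits, keyword_ids, boost=4.0):
        # Add a fixed bonus to the logits of keyword tokens so they become more likely.
        shifted = logits.clone()
        shifted[keyword_ids] += boost
        return shifted

    # Toy example with a 6-token vocabulary; in practice the logits would come from a
    # frozen pre-trained LM, e.g. model(input_ids).logits[:, -1, :].
    logits = torch.tensor([1.0, 0.5, 0.2, -0.3, 2.0, 0.1])
    keyword_ids = [2, 3]                       # ids of the guide words to include
    probs = torch.softmax(boost_keyword_logits(logits, keyword_ids), dim=-1)
    print(probs)                               # probability mass shifts toward the keywords

In a full decoding loop the bonus for a keyword would typically be dropped or decayed once that keyword has actually been generated, so the model is not pushed to repeat it.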

Citations

Fine-Grained Controllable Text Generation Using Non-Residual Prompting

This work proposes an encoder-decoder architecture that enables intermediate text prompts at arbitrary time steps, along with a resource-efficient method for converting a pre-trained CLM into this architecture, and demonstrates its potential in various experiments, including the novel task of contextualized word inclusion.

Uniform Complexity for Text Generation

UCTG is introduced as a challenge that requires existing models to generate uniformly complex text with respect to the inputs or prompts used, and potential methods and approaches are laid out that can be incorporated into the general framework of steering language models towards addressing this challenge.

A Survey of Pretrained Language Models Based Text Generation

This survey presents the recent advances achieved in the topic of PLMs for text generation and introduces three key points of applying PLMs to text generation, including how to encode the input data as representations that preserve input semantics and can be fused into PLMs.

Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio

It is found that most language models generate compelling text even under significant constraints, and a simple, universally applicable technique is presented for modifying the output of a language model by compositionally applying filter functions to the language model's vocabulary before a unit of text is generated.
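
The filtering technique summarized above lends itself to a small illustration. The sketch below is a toy, not the Studio's actual code: apply_filters, the example predicates, and the six-word vocabulary are made up for the example. It shows the compositional idea: every candidate word must pass all active filter functions, and rejected words have their logits masked to negative infinity before the next unit of text is chosen.

    import math

    def apply_filters(vocab, logits, filters):
        # Mask out (set to -inf) the logit of every word rejected by any active filter.
        return [logit if all(f(word) for f in filters) else -math.inf
                for word, logit in zip(vocab, logits)]

    # Example filters: ban the letter "e" and require short words.
    no_letter_e = lambda w: "e" not in w
    max_len_five = lambda w: len(w) <= 5

    vocab  = ["the", "cat", "sat", "elephant", "dog", "ran"]
    logits = [2.0, 1.5, 1.2, 3.0, 0.8, 0.5]
    print(apply_filters(vocab, logits, [no_letter_e, max_len_five]))
    # "the" and "elephant" are filtered out; only the remaining words can be generated.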

A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models

This is the first survey paper to summarize CTG techniques from the perspective of PLMs, and it is hoped that it will help researchers in related fields quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.

Pretrained Language Models for Text Generation: A Survey

This paper presents an overview of the major advances achieved in the topic of pretrained language models for text generation and discusses how to adapt existing PLMs to model different input data and satisfy special properties in the generated text.

PCFG-based Natural Language Interface Improves Generalization for Controlled Text Generation

This work proposes a natural language (NL) interface, where a PCFG is crafted to embed the control attributes into natural language commands, and proposes variants of existing CTG models that take these commands as input.

Generating Training Data with Language Models: Towards Zero-Shot Language Understanding

This paper presents a simple approach that uses both types of PLMs for fully zero-shot learning of NLU tasks without requiring any task-specific data: a unidirectional PLM generates class-conditioned texts guided by prompts, which are used as the training data for a bidirectional PLM.

Gradient-Based Constrained Sampling from Language Models

This work proposes MuCoLa, a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints in a single energy function and then generates samples in a non-autoregressive manner.

Constrained Sampling from Language Models via Langevin Dynamics in Embedding Spaces

This work proposes a sampling procedure that combines the log-likelihood of the language model with arbitrary differentiable constraints into a single energy function, and generates samples by initializing the entire output sequence with noise and following a Markov chain defined by Langevin dynamics using the gradients of this energy.
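
The two entries above describe the same energy-based view of constrained sampling, which a small sketch can make concrete. The code below is illustrative only, not the authors' implementation: energy, langevin_step, the quadratic stand-in for the language-model term, and the single toy constraint are assumptions. It shows the core loop: an energy that adds weighted differentiable constraint penalties to the model's negative log-likelihood, and Langevin updates that follow the energy gradient with injected Gaussian noise.

    import torch

    def energy(e, lm_nll, constraints, lambdas):
        # E(e) = -log p_LM(e) + sum_i lambda_i * f_i(e); every term is differentiable.
        return lm_nll(e) + sum(l * f(e) for l, f in zip(lambdas, constraints))

    def langevin_step(e, lm_nll, constraints, lambdas, step_size=0.1):
        e = e.detach().requires_grad_(True)
        grad, = torch.autograd.grad(energy(e, lm_nll, constraints, lambdas), e)
        noise = torch.randn_like(e) * (2 * step_size) ** 0.5
        return (e - step_size * grad + noise).detach()

    # Toy stand-ins: a quadratic "language model" term and one keyword-style constraint.
    lm_nll = lambda e: (e ** 2).sum()                      # placeholder for -log p_LM(e)
    target = torch.ones(4)                                 # pretend keyword embedding
    constraints = [lambda e: ((e - target) ** 2).sum()]    # pull e toward the target
    e = torch.randn(4)
    for _ in range(200):
        e = langevin_step(e, lm_nll, constraints, lambdas=[0.5])
    print(e)   # settles near a low-energy compromise between the two terms

In the actual papers the sample is a sequence of token embeddings scored by a real language model, and the constraints are differentiable functions such as classifier scores or keyword-similarity terms.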

References

Showing 1-10 of 45 references.

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

The Plug and Play Language Model (PPLM) for controllable language generation is proposed, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM.
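
The guidance mechanism summarized above can be sketched at a toy scale. The snippet below is a rough illustration, not PPLM's actual procedure (which perturbs the key-value history of a transformer): the 8-dimensional hidden vector, the linear attribute classifier, and the step size are placeholders. It shows the underlying idea of nudging the model's hidden representation up the gradient of an attribute classifier's log-probability while leaving all model weights untouched.

    # Toy illustration of classifier-guided steering (not PPLM's exact procedure).
    import torch

    torch.manual_seed(0)
    hidden = torch.randn(8, requires_grad=True)      # stand-in for an LM hidden state
    attribute_clf = torch.nn.Linear(8, 2)            # tiny stand-in attribute classifier

    for _ in range(5):
        log_p_desired = torch.log_softmax(attribute_clf(hidden), dim=-1)[1]
        grad, = torch.autograd.grad(log_p_desired, hidden)
        hidden = (hidden + 0.05 * grad).detach().requires_grad_(True)  # ascend the attribute

    # In PPLM the perturbed state is fed back into the frozen LM so that the
    # next-token distribution shifts toward the desired attribute.
    print(torch.softmax(attribute_clf(hidden), dim=-1))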

Sparse Text Generation

This paper uses the recently introduced entmax transformation to train and sample from a natively sparse language model, avoiding the mismatch between training and testing conditions, and proposes three new metrics for comparing sparse or truncated distributions: ε-perplexity, sparsemax score, and Jensen-Shannon divergence.

Changing the Mind of Transformers for Topically-Controllable Language Generation

A framework is proposed that displays multiple candidate upcoming topics, consisting of a method that produces the set of candidate topics by predicting the centers of word clusters in the possible continuations and a text generation model whose output adheres to the chosen topics.

GeDi: Generative Discriminator Guided Sequence Generation

GeDi is proposed as an efficient method for using smaller LMs as generative discriminators to guide generation from large LMs, making them safer and more controllable; GeDi is found to give stronger controllability than the state-of-the-art method while also achieving generation speeds more than 30 times faster.

Backward and Forward Language Modeling for Constrained Sentence Generation

A novel backward and forward language model is proposed that uses RNNs to generate the previous words and future words of a given word, either simultaneously or asynchronously, resulting in two model variants in which the constrained word can appear at any position in the sentence.

Neural Text Generation with Unlikelihood Training

It is shown that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution, and unlikelihood training is proposed as a strong alternative to existing techniques.

CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling

This paper proposes CGMH, a novel approach using Metropolis-Hastings sampling for constrained sentence generation that allows complicated constraints such as the occurrence of multiple keywords in the target sentences, which cannot be handled in traditional RNN-based approaches.
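
A small sketch can illustrate the Metropolis-Hastings accept/reject step that the above approach builds on. The code below is a toy, not CGMH's implementation: mh_accept, log_score, and the keyword-counting stationary distribution are stand-ins for a real language-model score and the paper's word-level insertion, deletion, and replacement proposals.

    import math, random

    def mh_accept(logscore_old, logscore_new, q_forward=1.0, q_backward=1.0):
        # Accept with probability min(1, pi(x')/pi(x) * q(x|x')/q(x'|x)).
        ratio = math.exp(logscore_new - logscore_old) * (q_backward / q_forward)
        return random.random() < min(1.0, ratio)

    # Toy stationary distribution: reward sentences that contain required keywords
    # (a stand-in for a language-model likelihood combined with keyword constraints).
    def log_score(sentence, keywords):
        hits = sum(k in sentence for k in keywords)
        return 2.0 * hits - 0.1 * len(sentence)

    keywords = ["cat", "mat"]
    current  = ["the", "dog", "sat"]
    proposal = ["the", "cat", "sat"]           # e.g. a word-replacement proposal
    if mh_accept(log_score(current, keywords), log_score(proposal, keywords)):
        current = proposal
    print(current)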

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

CTRL: A Conditional Transformer Language Model for Controllable Generation

CTRL is released, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior, providing more explicit control over text generation.

Gradient-guided Unsupervised Lexically Constrained Text Generation

This paper proposes a novel method, G2LC, that treats lexically constrained generation as an unsupervised gradient-guided optimization problem: a differentiable objective function is proposed, and its gradient is used to help determine which position in the sequence should be changed (deleted, or inserted/replaced by another word).