GeDi: Generative Discriminator Guided Sequence Generation

@inproceedings{Krause2021GeDiGD,
  title={GeDi: Generative Discriminator Guided Sequence Generation},
  author={Ben Krause and Akhilesh Deepak Gotmare and Bryan McCann and Nitish Shirish Keskar and Shafiq R. Joty and Richard Socher and Nazneen Rajani},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2021}
}
While large-scale language models (LMs) are able to imitate the distribution of natural language well enough to generate realistic text, it is difficult to control which regions of the distribution they generate. This is especially problematic because datasets used for training large LMs usually contain significant toxicity, hate, bias, and negativity. We propose GeDi as an efficient method for using smaller LMs as generative discriminators to guide generation from large LMs to make them safer… 
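The mechanism described in the abstract is to let a small class-conditional LM act as a generative discriminator that reweights the large LM's next-token distribution. Below is a minimal Python sketch of that guided-decoding idea; the per-step Bayes-rule posterior with equal class priors, the function names, and the guidance exponent omega are simplifying assumptions for illustration, not the authors' implementation.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gedi_reweight(base_logits, desired_logits, anti_logits, omega=30.0):
    # base_logits:    next-token logits from the large base LM, shape (V,)
    # desired_logits: logits from the small class-conditional LM under the
    #                 desired control code, shape (V,)
    # anti_logits:    logits from the same small LM under the opposing code
    # omega:          guidance strength (illustrative value)
    p_base = softmax(base_logits)
    p_desired = softmax(desired_logits)
    p_anti = softmax(anti_logits)
    # Bayes-rule posterior over the desired class for each candidate token,
    # assuming equal class priors (a per-step simplification).
    p_class = p_desired / (p_desired + p_anti + 1e-12)
    # Weight the base distribution by the class posterior and renormalize.
    weighted = p_base * (p_class ** omega)
    return weighted / weighted.sum()

# Toy usage with a 5-token vocabulary and random logits.
rng = np.random.default_rng(0)
next_token_probs = gedi_reweight(rng.normal(size=5), rng.normal(size=5), rng.normal(size=5))
print(next_token_probs.round(3), next_token_probs.sum())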

Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation

Directed Beam Search is proposed: a plug-and-play method for lexically constrained language generation that can be applied to any language model, is easy to implement, and can be used for general language generation.

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm

It is suggested that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning, which motivates rethinking the role of prompts in controlling and evaluating powerful language models.

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
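A minimal PyTorch sketch of this setup is shown below: a toy backbone is frozen and only a short sequence of continuous prefix vectors receives gradients. The toy encoder and the next-token objective are assumptions for illustration; the actual method injects prefix activations into every layer of a pretrained autoregressive LM.

import torch
import torch.nn as nn

class PrefixTunedLM(nn.Module):
    """Toy prefix-tuned model: the backbone is frozen and only the continuous
    prefix vectors are trainable (hypothetical architecture for illustration)."""

    def __init__(self, vocab_size=100, d_model=32, prefix_len=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)
        for p in self.parameters():        # freeze everything defined so far
            p.requires_grad = False
        self.prefix = nn.Parameter(torch.randn(prefix_len, d_model))  # trainable

    def forward(self, input_ids):
        x = self.embed(input_ids)                                    # (B, T, D)
        prefix = self.prefix.unsqueeze(0).expand(x.size(0), -1, -1)  # (B, P, D)
        h = self.backbone(torch.cat([prefix, x], dim=1))             # prepend prefix
        return self.head(h[:, self.prefix.size(0):])                 # token logits

model = PrefixTunedLM()
optimizer = torch.optim.Adam([model.prefix], lr=1e-3)  # optimize the prefix only

input_ids = torch.randint(0, 100, (2, 8))
logits = model(input_ids)
# Toy objective (predict the input tokens themselves), just to show the update.
loss = nn.functional.cross_entropy(logits.reshape(-1, 100), input_ids.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(logits.shape, float(loss))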

IGA: An Intent-Guided Authoring Assistant

An interactive writing assistant is built that generates and rephrases text according to fine-grained author specifications, by fine-tuning a language model on a dataset heuristically labeled with author intent.

Plug-and-Play Conversational Models

This paper proposes and evaluates plug-and-play methods for controllable response generation, and demonstrates a high degree of control over the generated conversational responses with regard to multiple desired attributes, while being fluent.

Contrastive Triple Extraction with Generative Transformer

This paper introduces a novel model, contrastive triple extraction with a generative transformer, comprising a transformer module for encoder-decoder-based generation, and proposes a novel triplet contrastive training objective to generate faithful results.

Towards Neural Programming Interfaces

The efficacy of the methods using OpenAI's GPT-2 model is demonstrated, successfully controlling noun selection, topic aversion, offensive speech filtering, and other aspects of language while largely maintaining the controlled model's fluency under deterministic settings.

Towards Personalised and Document-level Machine Translation of Dialogue

This thesis proposal focuses on PersNMT and DocNMT for the domain of dialogue extracted from TV subtitles in five languages: English, Brazilian Portuguese, German, French and Polish.

Recipes for Safety in Open-domain Chatbots

A new human-and-model-in-the-loop framework for both training safer models and evaluating them is introduced, along with a novel method for distilling safety considerations into generative models without the use of an external classifier at deployment time.

Detoxifying Language Models Risks Marginalizing Minority Voices

It is found that detoxification makes LMs more brittle to distribution shift, especially on language used by marginalized groups, and the tension between the controllability and distributional robustness of LMs is highlighted.
...

References

Showing 1–10 of 63 references

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

The Plug and Play Language Model (PPLM) for controllable language generation is proposed, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM.
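As summarized above, the method steers a frozen LM at decoding time using gradients from a lightweight attribute classifier. The toy PyTorch sketch below perturbs a single hidden vector along that gradient; the real method perturbs the transformer's key/value history and adds fluency-preserving terms, and the classifier, step size, and iteration count here are illustrative assumptions.

import torch
import torch.nn as nn

def pplm_steer(hidden, attr_classifier, target_class, step_size=0.02, n_iters=3):
    # Nudge a decoder hidden state along the gradient of a small attribute
    # classifier so the next-token distribution shifts toward the desired
    # attribute (toy version of the PPLM-style perturbation).
    delta = torch.zeros_like(hidden, requires_grad=True)
    for _ in range(n_iters):
        loss = nn.functional.cross_entropy(attr_classifier(hidden + delta), target_class)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad / (delta.grad.norm() + 1e-12)
        delta.grad.zero_()
    return (hidden + delta).detach()

# Toy usage: a random "hidden state" and a tiny attribute classifier head.
hidden = torch.randn(1, 64)
attr_classifier = nn.Linear(64, 2)   # hypothetical 2-class attribute classifier
target = torch.tensor([1])           # steer toward attribute class 1
steered_hidden = pplm_steer(hidden, attr_classifier, target)
print(steered_hidden.shape)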

MaskGAN: Better Text Generation via Filling in the ______

This work introduces an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context and presents qualitative and quantitative evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.

CTRL: A Conditional Transformer Language Model for Controllable Generation

CTRL is released, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior, providing more explicit control over text generation.
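Because control codes are ordinary tokens prepended to the text, conditioning at inference time reduces to prompt construction. The sketch below assumes the Hugging Face transformers library and the "Salesforce/ctrl" checkpoint are available, and the "Wikipedia" control code is taken, as an assumption, from the public CTRL release.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/ctrl")

# The leading "Wikipedia" token acts as the control code steering style/content.
prompt = "Wikipedia Controllable text generation is"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))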

Toward Controlled Generation of Text

A new neural generative model is proposed which combines variational auto-encoders and holistic attribute discriminators for effective imposition of semantic structures in generic generation and manipulation of text.

Learning to Write with Cooperative Discriminators

Human evaluation demonstrates that text generated by the unified learning framework is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
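The "one additional output layer" recipe can be sketched directly: take the pretrained encoder's [CLS] representation and attach a single linear head for the downstream task. The checkpoint name and the two-class head below are illustrative assumptions.

import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = nn.Linear(encoder.config.hidden_size, 2)   # the single added output layer

inputs = tokenizer("GeDi guides generation from large language models.", return_tensors="pt")
cls_vector = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] token representation
logits = classifier(cls_vector)
print(logits.shape)  # torch.Size([1, 2])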

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing policy gradient updates.
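A toy PyTorch sketch of that policy-gradient step follows: sample a sequence from the generator treated as a stochastic policy, score it with a discriminator, and apply a REINFORCE update scaled by the reward. The network sizes are arbitrary assumptions, and the Monte Carlo rollouts the paper uses for per-step rewards are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, emb_dim, hid_dim, seq_len = 20, 16, 32, 8

# Toy autoregressive generator (embedding + GRU cell + output projection).
embed = nn.Embedding(vocab_size, emb_dim)
rnn = nn.GRUCell(emb_dim, hid_dim)
out = nn.Linear(hid_dim, vocab_size)
generator_params = list(embed.parameters()) + list(rnn.parameters()) + list(out.parameters())
optimizer = torch.optim.Adam(generator_params, lr=1e-3)

# Toy discriminator over one-hot encoded sequences (would normally be a CNN/RNN).
discriminator = nn.Sequential(nn.Linear(seq_len * vocab_size, 1), nn.Sigmoid())

# Sample a sequence from the generator, treated as a stochastic policy.
h = torch.zeros(1, hid_dim)
token = torch.zeros(1, dtype=torch.long)   # hypothetical start-token id 0
log_probs, tokens = [], []
for _ in range(seq_len):
    h = rnn(embed(token), h)
    dist = torch.distributions.Categorical(logits=out(h))
    token = dist.sample()
    log_probs.append(dist.log_prob(token))
    tokens.append(token)

# Reward: the discriminator's probability that the sampled sequence is "real".
seq = torch.stack(tokens, dim=1)                        # (1, seq_len)
seq_onehot = F.one_hot(seq, vocab_size).float().reshape(1, -1)
reward = discriminator(seq_onehot).squeeze()

# REINFORCE: raise the log-likelihood of the sampled tokens, scaled by reward.
loss = -reward.detach() * torch.stack(log_probs).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(reward), float(loss))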

Language Models are Few-Shot Learners

GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

Improving Language Understanding by Generative Pre-Training

The general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, improving upon the state of the art in 9 out of the 12 tasks studied.
...