Corpus ID: 29797603

Generating Text via Adversarial Training

@inproceedings{Zhang2016GeneratingTV,
  title={Generating Text via Adversarial Training},
  author={Yizhe Zhang and Zhe Gan and Lawrence Carin},
  year={2016}
}
Generative Adversarial Networks (GANs) have achieved great success in generating realistic synthetic real-valued data. However, the discrete output of language models hinders the application of gradient-based GANs. In this paper we propose a generic framework employing a Long Short-Term Memory (LSTM) network and a convolutional neural network (CNN) for adversarial training to generate realistic text. Instead of using the standard GAN objective, we match the feature distribution when training the generator… 
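To make the setup concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: an LSTM generator emitting soft word embeddings, a CNN discriminator with max-over-time pooling, and a feature-matching generator loss. All module names, dimensions, and the soft-embedding feedback scheme are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the LSTM-generator / CNN-discriminator setup described
# in the abstract. All names and hyperparameters are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Generates a sequence of soft token embeddings from a noise vector."""
    def __init__(self, noise_dim=100, emb_dim=300, hidden_dim=500, seq_len=20):
        super().__init__()
        self.seq_len = seq_len
        self.init_h = nn.Linear(noise_dim, hidden_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, emb_dim)

    def forward(self, z):
        b = z.size(0)
        h = self.init_h(z).unsqueeze(0)            # initial hidden state from noise
        c = torch.zeros_like(h)
        x = torch.zeros(b, 1, self.out.out_features, device=z.device)
        outputs = []
        for _ in range(self.seq_len):              # feed soft embeddings back in
            o, (h, c) = self.lstm(x, (h, c))
            x = self.out(o)
            outputs.append(x)
        return torch.cat(outputs, dim=1)           # (b, seq_len, emb_dim)

class CNNDiscriminator(nn.Module):
    """CNN sentence encoder; exposes penultimate features for matching."""
    def __init__(self, emb_dim=300, n_filters=300, width=5):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, width)
        self.clf = nn.Linear(n_filters, 1)

    def features(self, x):                         # x: (b, seq_len, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))
        return h.max(dim=2).values                 # max-over-time pooling

    def forward(self, x):
        return self.clf(self.features(x))

# Feature-matching generator loss: match discriminator features on real
# vs. synthetic sentences instead of maximizing log D(G(z)).
def feature_matching_loss(disc, real_emb, fake_emb):
    return ((disc.features(real_emb).mean(0) -
             disc.features(fake_emb).mean(0)) ** 2).sum()
```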

Citations

Adversarial Feature Matching for Text Generation
TLDR
This work proposes a framework for generating realistic text via adversarial training, using a long short-term memory network as the generator and a convolutional network as the discriminator, and matches the high-dimensional latent feature distributions of real and synthetic sentences via a kernelized discrepancy metric.
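A kernelized discrepancy metric of this kind is commonly a maximum mean discrepancy (MMD) with a Gaussian kernel between discriminator features of real and synthetic batches. The sketch below shows the biased MMD estimator under that assumption; the paper's exact kernel and bandwidth choices may differ.

```python
# Sketch of a kernelized discrepancy (MMD with a Gaussian kernel) between
# real and synthetic feature batches. The bandwidth is an assumed value,
# and this is the biased estimator (diagonal terms included).
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # x: (n, d), y: (m, d) -> (n, m) kernel matrix
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(f_real, f_fake, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches."""
    k_rr = gaussian_kernel(f_real, f_real, sigma).mean()
    k_ff = gaussian_kernel(f_fake, f_fake, sigma).mean()
    k_rf = gaussian_kernel(f_real, f_fake, sigma).mean()
    return k_rr + k_ff - 2 * k_rf
```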
Language Generation with Recurrent Generative Adversarial Networks without Pre-training
TLDR
It is shown that recurrent neural networks can be trained to generate text with GANs from scratch by slowly teaching the model to generate sequences of increasing and variable length, which vastly improves the quality of generated sequences compared to a convolutional baseline.
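The curriculum itself is simple to state. Below is a hypothetical sketch of the length schedule only; the actual generator/discriminator update is abstracted behind a caller-supplied train_step function, which is an assumption for illustration.

```python
# Sketch of the length curriculum: train on progressively longer,
# variable-length sequences. train_step is a hypothetical callable that
# performs one GAN update at the given maximum length.
import random

def length_curriculum(train_step, max_len=32, steps_per_stage=1000):
    for cur_len in range(1, max_len + 1):          # slowly grow the length cap
        for _ in range(steps_per_stage):
            train_step(random.randint(1, cur_len)) # variable length up to the cap
```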
Text-To-Text Generative Adversarial Networks
TLDR
A novel Text-to-Text Generative Adversarial Network (TT-GAN) is developed, presented as the first framework capable of generating natural language at the semantic level, which gives a new perspective on applying GANs to NLP research.
CMPS 242 Final Project Report: GANTOR
Generative Adversarial Networks (GANs) are a very popular deep learning model for image recognition and generation that has demonstrated the ability to generate realistic images. They consist of two…
A Continuous Approach to Controllable Text Generation using Generative Adversarial Networks
TLDR
This work proposes a novel approach that requires no modification to the training process introduced by Goodfellow et al.
Evolutionary Generative Adversarial Networks
TLDR
A novel GAN framework called Evolutionary GAN (E-GAN) is proposed for stable GAN training and improved generative performance; it overcomes the limitations of an individual adversarial training objective and always preserves the best-performing offspring, contributing to the progress and success of GANs.
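The evolutionary step can be paraphrased as: update ("mutate") the generator under several adversarial objectives, then keep only the fittest offspring. The schematic below illustrates that selection loop with assumed mutation and fitness callables; it is not the paper's implementation.

```python
# Schematic of the E-GAN generation step: mutate the generator under
# several adversarial objectives and keep only the fittest offspring.
# The mutation and fitness callables are assumptions for illustration.
import copy

def egan_generation(generator, mutations, fitness):
    """mutations: functions returning an updated copy of the generator;
    fitness: function scoring a generator (higher is better)."""
    offspring = [mutate(copy.deepcopy(generator)) for mutate in mutations]
    return max(offspring, key=fitness)             # keep the best child
```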
A Research on Generative Adversarial Networks Applied to Text Generation
TLDR
An improved GAN-based model is proposed that uses a Transformer network structure instead of the original convolutional or recurrent neural network as the generator, and uses the Actor-Critic reinforcement learning algorithm to improve the training procedure.
Generative Adversarial Nets for Multiple Text Corpora
TLDR
This work demonstrates the GAN models on real-world text data sets from different corpora, and shows that embeddings from both models lead to improvements in supervised learning problems.
News text generation with adversarial deep learning
TLDR
It is shown that it is possible to use generative adversarial networks to generate sequences of tokens that resemble natural language, but this does not yet reach the quality of human-written text.
Generating Text using Generative Adversarial Networks and Quick-Thought Vectors
TLDR
This paper presents a Quick-Thought GAN (QTGAN) that generates sentences by incorporating the Quick-Thought model, which offers richer representations than prior unsupervised and supervised methods and enables a classifier to distinguish context sentences from other contrastive sentences.
...
...

References

SHOWING 1-10 OF 24 REFERENCES
Unrolled Generative Adversarial Networks
TLDR
This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.
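Schematically, unrolling means evaluating the generator's loss against a discriminator that has taken k extra update steps. The sketch below trains a detached copy of the discriminator and, for simplicity, does not backpropagate through the unrolled updates, which the full method does; all names and hyperparameters are assumptions.

```python
# Simplified unrolled-GAN generator loss: evaluate the generator against a
# discriminator copy that has been updated k extra steps. This sketch does
# NOT differentiate through the inner updates, unlike the full method.
import copy
import torch

def generator_loss_unrolled(G, D, real_batch, z, k=5, lr=1e-2):
    D_u = copy.deepcopy(D)                         # surrogate discriminator copy
    opt = torch.optim.SGD(D_u.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(k):                             # k unrolled discriminator steps
        opt.zero_grad()
        real_logits = D_u(real_batch)
        fake_logits = D_u(G(z).detach())
        d_loss = (bce(real_logits, torch.ones_like(real_logits)) +
                  bce(fake_logits, torch.zeros_like(fake_logits)))
        d_loss.backward()
        opt.step()
    fake_logits = D_u(G(z))                        # generator loss vs. unrolled D
    return bce(fake_logits, torch.ones_like(fake_logits))
```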
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
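For reference, the adversarial process is the two-player minimax game over the value function from the paper:

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\left(1 - D(G(z))\right)\right]
```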
Generating Sentences from a Continuous Space
TLDR
This work introduces and studies an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences, allowing it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features.
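Such a model is trained by maximizing the standard variational lower bound (ELBO), stated here in its usual form with x a sentence and z its latent code:

```latex
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```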
Skip-Thought Vectors
We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage.
A Convolutional Neural Network for Modelling Sentences
TLDR
A convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) is described that is adopted for the semantic modelling of sentences and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations.
Sequence to Sequence Learning with Neural Networks
TLDR
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions about sequence structure, and finds that reversing the order of the words in all source sentences markedly improved the LSTM's performance, because doing so introduced many short-term dependencies between the source and the target sentence that made the optimization problem easier.
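The reversal trick amounts to a one-line preprocessing step applied to the source side only; a minimal sketch:

```python
# The source-reversal trick: reverse only the source tokens so that early
# source words end up close to the early target words they align with.
def reverse_source(src_tokens, tgt_tokens):
    return src_tokens[::-1], tgt_tokens

# reverse_source(["a", "b", "c"], ["x", "y"]) -> (["c", "b", "a"], ["x", "y"])
```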
Convolutional Neural Networks for Sentence Classification
TLDR
The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, including sentiment analysis and question classification, and a simple modification to the architecture is proposed to allow for the use of both task-specific and static vectors.
Semi-supervised Sequence Learning
TLDR
Two approaches to using unlabeled data to improve sequence learning with recurrent networks are presented, and it is found that long short-term memory recurrent networks, after being pretrained with the two approaches, become more stable to train and generalize better.
Convolutional Neural Network Architectures for Matching Natural Language Sentences
TLDR
Convolutional neural network models for matching two sentences are proposed, adapting the convolutional strategies from vision and speech; the models nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling.
...
...