Corpus ID: 3620298

DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text

@article{Xu2018DPGANDG,
  title={DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text},
  author={Jingjing Xu and Xu Sun and Xuancheng Ren and Junyang Lin and Bingzhen Wei and Wei Li},
  journal={ArXiv},
  year={2018},
  volume={abs/1802.01345}
}
Existing text generation methods tend to produce repeated and "boring" expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeated text and high reward for "novel" text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel language-model based discriminator, which can better distinguish novel text from repeated text… 
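
To make the reward scheme concrete, here is a minimal Python sketch, assuming (as a simplification of the paper's setup) that the discriminator is a language model and that a token's reward is its negative log-likelihood under that model, clipped at a ceiling; the function names and interface are hypothetical.

```python
import numpy as np

def token_rewards(lm_probs, ceiling=10.0):
    # Per-token reward: negative log-likelihood under the discriminator
    # language model, clipped so extremely unlikely gibberish does not
    # receive an unbounded reward.
    rewards = -np.log(np.clip(np.asarray(lm_probs), 1e-8, 1.0))
    return np.minimum(rewards, ceiling)

def sentence_reward(lm_probs, ceiling=10.0):
    # Sentence-level reward: the mean of the token-level rewards.
    return float(np.mean(token_rewards(lm_probs, ceiling)))

# A repeated, high-probability phrase earns a low reward ...
print(sentence_reward([0.9, 0.85, 0.9]))    # ~0.12, "boring" text
# ... while novel, lower-probability text earns a high reward.
print(sentence_reward([0.05, 0.10, 0.02]))  # ~3.07, "novel" text
```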

Citations

EnsembleGAN: Adversarial Learning for Retrieval-Generation Ensemble Model on Short-Text Conversation

TLDR
This paper proposes ensembleGAN, an adversarial learning framework for enhancing a retrieval-generation ensemble model in an open-domain conversation scenario, consisting of a language-model-like generator, a ranker generator, and a ranker discriminator.

An Empirical Study on the Membership Inference Attack against Tabular Data Synthesis Models

TLDR
This paper conducts experiments on 4 state-of-the-art tabular data synthesis models under two attack scenarios, and finds that the membership inference attack can seriously jeopardize these models.

A Task-oriented Chatbot Based on LSTM and Reinforcement Learning

TLDR
This work proposes a method to build a task-oriented chatbot using a GAN-based sentence generation model, which generates more diverse and information-rich sentences than existing approaches.

Aero-Engine Faults Diagnosis Based on K-Means Improved Wasserstein GAN and Relevant Vector Machine

TLDR
This paper proposes a semi-supervised learning approach based on an improved Wasserstein generative adversarial network and K-means clustering, which better fits the fault sample distribution and generates more appropriate new samples by learning from a small number of fault samples.

Paraphrase Diversification Using Counterfactual Debiasing

TLDR
This work considers style transfer as a means of imposing diversity, under the correctness constraint that the target sentence must remain a paraphrase of the original, and proposes a model that generates more diverse yet semantically consistent paraphrases.

Exploring Diverse Expressions for Paraphrase Generation

TLDR
This paper proposes a novel approach with two discriminators and multiple generators to generate a variety of different paraphrases and demonstrates that the model not only gains a significant increase in diversity but also improves generation quality over several state-of-the-art baselines.

Linguistically-Informed Specificity and Semantic Plausibility for Dialogue Generation

TLDR
This work examines whether specificity is solely a frequency-related notion and finds that more linguistically driven specificity measures are better suited to improving response informativeness; it develops a model using linguistically motivated specificity and plausibility reranking.

Automatic Scoring for Translations Based on Language Models

TLDR
This work shows that the evaluation metrics of dialogue systems can feasibly be applied to translation scoring, suggesting a direction for improving the automatic scoring of translations.

Enhancing Text Generation via Parse Tree Embedding

TLDR
This work introduces a new generative model for NLG, called Tree-VAE, which samples a sentence from the training corpus and then generates a new sentence based on the corresponding parse tree embedding vector.

Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation

TLDR
Interpretable Directed Diversity is proposed to automatically predict ideation quality and diversity scores, with AI explanations (Attribution, Contrastive Attribution, and Counterfactual Suggestions) that explain why ideations were scored low and how to achieve higher scores.
...

References

Showing 1-10 of 47 references

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

TLDR
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing policy gradient updates.
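
A toy illustration of this policy-gradient idea, not SeqGAN itself: the per-position logit table below is a hypothetical stand-in for the RNN generator, and the scalar reward stands in for the discriminator's score of the sampled sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_sequence(logits):
    # Treat each position's softmax over the vocabulary as a stochastic
    # policy and sample one token (action) per step.
    return [rng.choice(logits.shape[1], p=softmax(logits[t]))
            for t in range(logits.shape[0])]

def reinforce_step(logits, tokens, reward, lr=0.1):
    # Policy gradient ascent: reward * d log pi(token) / d logits,
    # which sidesteps differentiating through the discrete samples.
    for t, a in enumerate(tokens):
        p = softmax(logits[t])
        grad_logp = -p
        grad_logp[a] += 1.0  # gradient of log-softmax w.r.t. the logits
        logits[t] += lr * reward * grad_logp
    return logits

logits = np.zeros((4, 5))  # 4 positions, 5-token vocabulary
seq = sample_sequence(logits)
logits = reinforce_step(logits, seq, reward=1.0)  # reward from a discriminator
```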

BEGAN: Boundary Equilibrium Generative Adversarial Networks

TLDR
This work proposes a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks, which provides a new approximate convergence measure, fast and stable training, and high visual quality.
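
A minimal sketch of the equilibrium mechanism as described in the BEGAN paper, assuming `loss_real` and `loss_fake` are the auto-encoder reconstruction losses on real and generated batches; the variable names and defaults are illustrative.

```python
def began_update(loss_real, loss_fake, k, gamma=0.5, lam=1e-3):
    # Equilibrium term: k balances the auto-encoder losses on real and
    # generated samples; gamma is the target ratio between them.
    d_loss = loss_real - k * loss_fake
    balance = gamma * loss_real - loss_fake
    k_next = min(max(k + lam * balance, 0.0), 1.0)
    # Approximate convergence measure from the paper's formulation.
    convergence = loss_real + abs(balance)
    return d_loss, k_next, convergence
```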

Improved Training of Wasserstein GANs

TLDR
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
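
A short PyTorch sketch of this gradient penalty, assuming `critic` is any module mapping a batch to per-sample scores; it follows the penalty described above but is not the authors' reference implementation.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # Sample points on straight lines between real and fake batches.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x = (eps * real + (1.0 - eps) * fake).detach().requires_grad_(True)
    scores = critic(x)
    # Gradient of the critic's output with respect to its input.
    grads, = torch.autograd.grad(scores.sum(), x, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    # Penalize deviation of the per-sample gradient norm from 1.
    return lam * ((grad_norm - 1.0) ** 2).mean()
```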

Adversarial Learning for Neural Dialogue Generation

TLDR
This work applies adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances, and investigates models for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls.

Deep Reinforcement Learning for Dialogue Generation

TLDR
This work simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering.
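
As a rough sketch of how such property rewards could be combined, assuming each property has already been scored separately; the weighted-sum form follows the paper, but the weights below are illustrative, not the tuned values.

```python
def dialogue_reward(informativity, coherence, ease_of_answering,
                    weights=(0.25, 0.25, 0.5)):
    # Weighted combination of the three per-property rewards, used as
    # the scalar return for the policy gradient update.
    w1, w2, w3 = weights
    return w1 * informativity + w2 * coherence + w3 * ease_of_answering
```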

A Diversity-Promoting Objective Function for Neural Conversation Models

TLDR
This work proposes using Maximum Mutual Information (MMI) as the objective function in neural models, and demonstrates that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.
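
A tiny sketch of MMI-style reranking in its anti-language-model form, score = log p(T|S) - λ log p(T); the probabilities and the weight λ below are illustrative.

```python
import math

def mmi_antilm_score(logp_t_given_s, logp_t, lam=0.5):
    # Subtracting a weighted language-model prior log p(T) demotes
    # generic responses that are likely regardless of the input.
    return logp_t_given_s - lam * logp_t

# A generic reply ("I don't know") has a high prior and is demoted ...
print(mmi_antilm_score(math.log(0.20), math.log(0.15)))  # ~ -0.66
# ... relative to a specific reply with a lower prior.
print(mmi_antilm_score(math.log(0.18), math.log(0.01)))  # ~ +0.59
```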

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

TLDR
This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight.
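
A minimal sketch of the diagonal variant of this adaptive scheme (Adagrad), with illustrative hyperparameters.

```python
import numpy as np

def adagrad_step(w, grad, g2_sum, lr=0.1, eps=1e-8):
    # Accumulate squared gradients and scale each coordinate's step by
    # the inverse root of its accumulated magnitude: frequently updated
    # coordinates take smaller steps, rare ones keep larger steps.
    g2_sum = g2_sum + grad ** 2
    w = w - lr * grad / (np.sqrt(g2_sum) + eps)
    return w, g2_sum

w, g2 = np.zeros(3), np.zeros(3)
w, g2 = adagrad_step(w, np.array([1.0, 0.1, 0.0]), g2)
```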

Energy-based Generative Adversarial Networks

An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation

TLDR
An Auto-Encoder Matching (AEM) model is proposed to learn utterance-level semantic dependency and is capable of generating responses of high coherence and fluency compared to baseline models.

A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation

TLDR
A skeleton-based model first generates the most critical phrases, called the skeleton, and then expands the skeleton into complete and fluent sentences; it generates significantly more coherent text according to both human and automatic evaluation.