Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network

@inproceedings{Feng2020PosteriorGANTI,
  title={Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network},
  author={Shaoxiong Feng and Hongshen Chen and Kan Li and Dawei Yin},
  booktitle={AAAI},
  year={2020}
}
Neural conversational models learn to generate responses by taking the dialogue history into account. These models are typically optimized over query-response pairs with a maximum likelihood estimation (MLE) objective. However, query-response pairs are naturally loosely coupled: many different responses can validly answer a given query, which makes learning burdensome for the conversational model. Moreover, the general dull-response problem is worsened when the model is confronted…
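The one-to-many issue described in the abstract can be made concrete with a toy calculation (a hypothetical illustration, not code from the paper): when one query has several equally frequent valid responses, the MLE-optimal model must spread probability mass across all of them, bounding how confident it can ever be.

```python
import math

# One query paired with three distinct, equally frequent responses
# (hypothetical data for illustration).
responses = ["fine thanks", "not bad", "pretty good"]

# The MLE solution for a categorical model over these observations is
# the empirical distribution: probability 1/3 for each response.
p_mle = {r: 1.0 / len(responses) for r in responses}

# The best average negative log-likelihood the model can achieve is
# therefore log(3), not 0 -- loose coupling puts a floor on the loss.
nll = -sum(math.log(p_mle[r]) for r in responses) / len(responses)
print(round(nll, 4))
```

This floor on the likelihood is one way to see why MLE-trained models drift toward bland, high-frequency "safe" responses that hedge across the valid alternatives.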


LocalGAN: Modeling Local Distributions for Adversarial Response Generation

This paper presents a new methodology for modeling the local semantic distribution of responses to a given query in a human-conversation corpus and, on this basis, explores a specified adversarial…

GTM: A Generative Triple-wise Model for Conversational Question Generation

Experimental results show that the proposed generative triple-wise model with hierarchical variations for open-domain conversational question generation (CQG) significantly improves the quality of questions in terms of fluency, coherence and diversity over competitive baselines.

PCVAE: Generating Prior Context for Dialogue Response Generation

This work presents Prior Context VAE (PCVAE), a hierarchical VAE that learns prior context from data automatically for dialogue generation and proposes Autoregressive Compatible Arrangement (ACA) that enables modeling prior context in autoregressive style, which is crucial for selecting appropriate prior context according to a given context.

Generating Relevant and Coherent Dialogue Responses using Self-Separated Conditional Variational AutoEncoders

Self-separated Conditional Variational AutoEncoder (abbreviated as SepaCVAE) is proposed that introduces group information to regularize the latent variables, which enhances CVAE by improving the responses’ relevance and coherence while maintaining their diversity and informativeness.

Neural Network With Hierarchical Attention Mechanism for Contextual Topic Dialogue Generation

This work improves upon existing models and attention mechanisms and proposes a new hierarchical model (the HAT model) to better handle dialogue context, enabling the model to obtain more contextual information during processing and improving its contextual relevance so that it produces higher-quality responses.

Neural Dialogue Generation Methods in Open Domain: A Survey

This survey elaborates the research history of existing generative methods and roughly divides them into six categories: Encoder-Decoder framework-based methods, Hierarchical Recurrent Encoder-Decoder (HRED)-based methods, Variational Autoencoder (VAE)-based methods, Reinforcement Learning (RL)-based methods, Generative Adversarial Network (GAN)-based methods, and pretraining-model-based methods.

ProphetChat: Enhancing Dialogue Generation with Simulation of Future Conversation

A novel dialogue generation framework named ProphetChat is proposed that utilizes simulated dialogue futures in the inference phase to enhance response generation and demonstrates that ProphetChat can generate better responses than strong baselines, which validates the advantages of incorporating the simulated dialogue futures.

Stop Filtering: Multi-View Attribute-Enhanced Dialogue Learning

A multi-view attribute-enhanced dialogue learning framework is proposed that strengthens attribute-related features more robustly and comprehensively and can improve model performance by enhancing dialogue attributes and fusing view-specific knowledge.

Regularizing Dialogue Generation by Imitating Implicit Scenarios

This work proposes to improve generative dialogue systems from the scenario perspective, where both dialogue history and future conversation are taken into account to implicitly reconstruct the scenario knowledge.

References


DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder

DialogWAE is proposed, a conditional Wasserstein autoencoder specially designed for dialogue modeling that models the distribution of data by training a GAN within the latent variable space and develops a Gaussian mixture prior network to enrich the latent space.

Neural Response Generation via GAN with an Approximate Embedding Layer

The proposed GAN setup provides an effective way to avoid noninformative responses (a.k.a “safe responses”) in traditional neural response generators, and significantly outperforms existing neural response generation models in diversity metrics.

Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

This work presents a novel framework based on conditional variational autoencoders that captures discourse-level diversity in the encoder, uses latent variables to learn a distribution over potential conversational intents, and generates diverse responses using only greedy decoders.

Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization

Adversarial Information Maximization (AIM) is an adversarial learning framework that addresses informativeness and diversity and explicitly optimizes a variational lower bound on the pairwise mutual information between query and response.
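The mutual-information objective mentioned above is typically made tractable with the standard variational (Barber-Agakov) lower bound; a generic statement of that bound (the notation here is illustrative, not copied from the AIM paper) is:

```latex
I(q; r) \;\ge\; H(q) \;+\; \mathbb{E}_{p(q,r)}\!\left[\log q_{\phi}(q \mid r)\right]
```

where $q_{\phi}$ is an auxiliary "backward" model that predicts the query from the response; the bound becomes tight when $q_{\phi}$ matches the true posterior $p(q \mid r)$, so maximizing it encourages responses that remain informative about their query.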

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing policy gradient updates.
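SeqGAN's trick of avoiding differentiation through discrete sampling can be sketched with a minimal REINFORCE-style loop (a toy illustration under simplified assumptions, not the SeqGAN implementation): the generator is a softmax policy over tokens, and a stand-in discriminator score serves as the reward weighting each log-probability gradient.

```python
import math
import random

random.seed(0)

vocab = ["good", "bad"]
logits = {"good": 0.0, "bad": 0.0}  # the generator's "parameters"

def probs():
    # Softmax policy over the toy vocabulary.
    z = sum(math.exp(v) for v in logits.values())
    return {t: math.exp(v) / z for t, v in logits.items()}

def reward(token):
    # Stand-in for a discriminator: "good" looks more human-like.
    return 1.0 if token == "good" else 0.0

lr = 0.5
for _ in range(200):
    p = probs()
    # Sample a token from the policy -- no gradient flows through this.
    token = random.choices(vocab, weights=[p[t] for t in vocab])[0]
    r = reward(token)
    # Policy gradient for a softmax policy:
    # d log p(token) / d logit_t = 1[t == token] - p[t],
    # scaled by the reward, so only the sampled outcome is needed.
    for t in vocab:
        grad = (1.0 if t == token else 0.0) - p[t]
        logits[t] += lr * r * grad

print(probs()["good"])  # probability mass shifts toward rewarded tokens
```

Because the update uses only sampled tokens and their rewards, the discrete, non-differentiable sampling step never needs to be backpropagated through, which is exactly the obstacle SeqGAN sidesteps.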

Adversarial Learning for Neural Dialogue Generation

This work applies adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances, and investigates models for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls.

Diversity-Promoting GAN: A Cross-Entropy Based Generative Adversarial Network for Diversified Text Generation

A novel language-model based discriminator is proposed, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators.

Reinforcing Coherence for Sequence to Sequence Model in Dialogue Generation

Three different types of coherence models, including an unlearned similarity function, a pretrained semantic matching function, and an end-to-end dual learning architecture, are proposed in this paper, showing that the proposed models produce more specific and meaningful responses, yielding better performances against Seq2Seq models in terms of both metric-based and human evaluations.

MaskGAN: Better Text Generation via Filling in the ______

This work introduces an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context and shows qualitative and quantitative evidence that this produces more realistic conditional and unconditional text samples than a maximum-likelihood-trained model.

Improving Variational Encoder-Decoders in Dialogue Generation

A separate VED model is developed that learns to autoencode discrete texts into continuous embeddings and generalize latent representations by reconstructing the encoded embedding using Gaussian noise and multi-layer perceptrons.