Generating Relevant and Coherent Dialogue Responses using Self-Separated Conditional Variational AutoEncoders

@article{Sun2021GeneratingRA,
  title={Generating Relevant and Coherent Dialogue Responses using Self-Separated Conditional Variational AutoEncoders},
  author={Bin Sun and Shaoxiong Feng and Yiwei Li and Jiamou Liu and Kan Li},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.03410}
}
Conditional Variational AutoEncoder (CVAE) effectively increases the diversity and informativeness of responses in open-ended dialogue generation tasks by enriching the context vector with sampled latent variables. However, due to the inherent one-to-many and many-to-one phenomena in human dialogues, the sampled latent variables may not correctly reflect the contexts' semantics, leading to irrelevant and incoherent generated responses. To resolve this problem, we propose Self-separated…
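As a hedged illustration of the mechanism the abstract describes, the sketch below shows how a CVAE prior network samples a latent variable from the context and concatenates it into the context vector before decoding. All module names and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the authors' code): a CVAE
# prior network that enriches the context vector with a sampled latent.
import torch
import torch.nn as nn

class CVAEPrior(nn.Module):
    def __init__(self, ctx_dim=512, latent_dim=64):
        super().__init__()
        # Prior network p(z | c): maps the context vector to Gaussian params.
        self.prior = nn.Linear(ctx_dim, 2 * latent_dim)

    def forward(self, ctx):
        mu, logvar = self.prior(ctx).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # The decoder conditions on [context; z]; resampling z yields
        # different responses for the same context (diversity), but a z
        # that drifts from the context's semantics can hurt relevance.
        return torch.cat([ctx, z], dim=-1), mu, logvar

ctx = torch.randn(4, 512)                # a batch of context vectors
enriched, mu, logvar = CVAEPrior()(ctx)  # enriched: (4, 512 + 64)
```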

PCVAE: Generating Prior Context for Dialogue Response Generation

This work presents Prior Context VAE (PCVAE), a hierarchical VAE that automatically learns prior context from data for dialogue generation, and proposes Autoregressive Compatible Arrangement (ACA), which enables modeling prior context in an autoregressive style and is crucial for selecting an appropriate prior context for a given context.

Stop Filtering: Multi-View Attribute-Enhanced Dialogue Learning

A multi-view attribute-enhanced dialogue learning framework that strengthens the attribute-related features more robustly and comprehensively and can improve the performance of models by enhancing dialogue attributes and fusing view-specific knowledge.

A Speaker-aware Parallel Hierarchical Attentive Encoder-Decoder Model for Multi-turn Dialogue Generation

A speaker-aware Parallel Hierarchical Attentive Encoder-Decoder (PHAED) model is proposed that models each utterance with awareness of its speaker and of its contextual associations with the same speaker's previous messages.

Hierarchical Inductive Transfer for Continual Dialogue Learning

A hierarchical inductive transfer framework is proposed that enables new tasks to use the general knowledge in a base adapter without being misled by the diverse knowledge in task-specific adapters, while obtaining comparable performance under a deployment-friendly model capacity.

Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Process Priors

A novel latent structured variable model is presented that generates high-quality texts by enriching the contextual representation learning of encoder-decoder models, together with a variational inference approach for approximating the posterior distribution of the random context variables.

An Empirical Study on the Overlapping Problem of Open-Domain Dialogue Datasets

This work observes the overlapping problem in DailyDialog and OpenSubtitles, two popular open-domain dialogue benchmark datasets, and shows that such overlapping can be exploited to obtain fake state-of-the-art performance.

A Response Generator with Response-Aware Encoder for Generating Specific and Relevant Responses

A sequence-to-sequence response generator with a response-aware encoder is proposed that exploits golden responses by reflecting them into the query representation, and the joint learning of a teacher and a student relevancy scorer is adopted.

PEVAE: A Hierarchical VAE for Personalized Explainable Recommendation.

PErsonalized VAE (PEVAE) is presented, which generates personalized natural-language explanations for explainable recommendation; it overcomes data sparsity while producing more personalized explanations for users with relatively sufficient training samples.

A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model

This paper designs a unified text-infilling component with modified attention mechanisms and intra- and inter-blank positional encoding to better perceive the number of blanks and the infilling length for each blank, and proposes a plug-and-play discriminator that guides generation towards greater attribute relevance without decreasing text fluency.

References

SHOWING 1-10 OF 52 REFERENCES

DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder

DialogWAE, a conditional Wasserstein autoencoder specially designed for dialogue modeling, is proposed; it models the data distribution by training a GAN within the latent variable space and uses a Gaussian mixture prior network to enrich the latent space.
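A minimal sketch of the Gaussian-mixture prior idea, under assumed shapes and a plain categorical component choice; DialogWAE's actual architecture (e.g., Gumbel-softmax component selection and adversarial training) differs.

```python
# Hedged sketch of a Gaussian-mixture prior network (shapes and the
# categorical sampling are assumptions, not the paper's exact design).
import torch
import torch.nn as nn

class GMMPrior(nn.Module):
    def __init__(self, ctx_dim=512, latent_dim=64, n_components=3):
        super().__init__()
        self.logits = nn.Linear(ctx_dim, n_components)  # mixture weights
        self.mu = nn.Linear(ctx_dim, n_components * latent_dim)
        self.logvar = nn.Linear(ctx_dim, n_components * latent_dim)
        self.k, self.d = n_components, latent_dim

    def forward(self, ctx):
        b = ctx.size(0)
        # Choose one mixture component per context, then sample from it.
        comp = torch.distributions.Categorical(logits=self.logits(ctx)).sample()
        idx = torch.arange(b)
        mu = self.mu(ctx).view(b, self.k, self.d)[idx, comp]
        logvar = self.logvar(ctx).view(b, self.k, self.d)[idx, comp]
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

z = GMMPrior()(torch.randn(4, 512))  # z: (4, 64), one sample per context
```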

Group-wise Contrastive Learning for Neural Dialogue Generation

This work introduces contrastive learning into dialogue generation, where the model explicitly perceives the difference between the well-chosen positive and negative utterances, and augments contrastive dialogue learning with group-wise dual sampling.
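The contrastive idea can be pictured with a standard InfoNCE-style loss over matching scores for positive versus negative context-response pairs; this is a generic sketch, not the paper's exact group-wise dual-sampling formulation, and the temperature is an assumption.

```python
# Generic contrastive sketch: the positive response should score higher
# than sampled negatives for the same context.
import torch
import torch.nn.functional as F

def contrastive_loss(score_pos, scores_neg, tau=0.5):
    # score_pos: (B,) scores for positive pairs; scores_neg: (B, K).
    logits = torch.cat([score_pos.unsqueeze(1), scores_neg], dim=1) / tau
    target = torch.zeros(logits.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, target)

print(contrastive_loss(torch.randn(4), torch.randn(4, 8)))
```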

Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity

A measure of coherence is introduced as the GloVe embedding similarity between the dialogue context and the generated response to improve coherence and diversity in encoder-decoder models for open-domain conversational agents.
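The measure itself is straightforward to sketch: average the word vectors of context and response and take the cosine. The toy dictionary below stands in for real pretrained GloVe embeddings.

```python
# Sketch of the coherence measure: cosine similarity between averaged
# word embeddings of context and response (toy vectors replace GloVe).
import numpy as np

rng = np.random.default_rng(0)
glove = {w: rng.random(50) for w in "where is the cafe it nearby".split()}

def avg_vec(tokens):
    vecs = [glove[t] for t in tokens if t in glove]
    return np.mean(vecs, axis=0)

def coherence(context, response):
    c, r = avg_vec(context.split()), avg_vec(response.split())
    return float(c @ r / (np.linalg.norm(c) * np.linalg.norm(r)))

print(coherence("where is the cafe", "it is nearby"))
```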

Self-Supervised Dialogue Learning

This work proposes a self-supervised learning task, inconsistent order detection, to explicitly capture the flow of conversation in dialogues, together with a joint learning framework in which the resulting self-supervised network (SSN) guides dialogue systems towards more coherent and relevant dialogue learning through adversarial training.

Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

This work presents a novel framework based on conditional variational autoencoders that capture the discourse-level diversity in the encoder and uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders.

Generating More Interesting Responses in Neural Conversation Models with Distributional Constraints

This work proposes a simple yet effective approach for incorporating side information in the form of distributional constraints over the generated responses that generates responses that are much less generic without sacrificing plausibility.

Reinforcing Coherence for Sequence to Sequence Model in Dialogue Generation

Three types of coherence models are proposed: an unlearned similarity function, a pretrained semantic matching function, and an end-to-end dual learning architecture. The proposed models produce more specific and meaningful responses, outperforming Seq2Seq baselines in both metric-based and human evaluations.

Hierarchical Variational Memory Network for Dialogue Generation

A novel hierarchical variational memory network (HVMN) is proposed, adding a hierarchical structure and a variational memory network to a neural encoder-decoder network; it can capture both high-level abstract variations and long-term memories during dialogue tracking, enabling random access to relevant dialogue histories.

A Conditional Variational Framework for Dialog Generation

This paper proposes a framework allowing conditional response generation based on specific attributes, which can be either manually assigned or automatically detected and validated on two different scenarios, where the attribute refers to genericness and sentiment states respectively.

Jointly Optimizing Diversity and Relevance in Neural Response Generation

A SpaceFusion model is proposed to jointly optimize diversity and relevance; it essentially fuses the latent space of a sequence-to-sequence model with that of an autoencoder model by leveraging novel regularization terms.
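In spirit, such regularizers pull paired points from the two latent spaces together and keep the path between them decodable. The sketch below illustrates that idea with assumed loss forms and a placeholder decoder likelihood, not the paper's exact terms.

```python
# Hedged sketch of SpaceFusion-style regularizers (loss forms assumed):
# fuse the S2S latent z_s2s and AE latent z_ae of the same pair, and keep
# interpolations between them decodable into the response.
import torch

def fusion_regularizers(z_s2s, z_ae, decoder_nll):
    # Fusion term: paired points from both spaces should be close.
    d_fuse = (z_s2s - z_ae).pow(2).sum(-1).sqrt().mean()
    # Interpolation term: random points on the segment between the two
    # latents should still decode to the response (decoder_nll scores that).
    u = torch.rand(z_s2s.size(0), 1)
    z_mix = u * z_s2s + (1 - u) * z_ae
    return d_fuse + decoder_nll(z_mix)

z1, z2 = torch.randn(4, 64), torch.randn(4, 64)
print(fusion_regularizers(z1, z2, lambda z: z.pow(2).mean()))  # dummy NLL
```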
...