THINK: A Novel Conversation Model for Generating Grammatically Correct and Coherent Responses

@article{Sun2021THINKAN,
  title={THINK: A Novel Conversation Model for Generating Grammatically Correct and Coherent Responses},
  author={Bin Sun and Shaoxiong Feng and Yiwei Li and Jiamou Liu and Kan Li},
  journal={arXiv preprint arXiv:2105.13630},
  year={2021}
}

References

Showing 1–10 of 51 references

ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation

Experimental results on both Chinese customer services dataset and English Ubuntu dialogue dataset show that ReCoSa significantly outperforms baseline models, in terms of both metric-based and human evaluations.

Generating Relevant and Coherent Dialogue Responses using Self-Separated Conditional Variational AutoEncoders

The Self-Separated Conditional Variational AutoEncoder (SepaCVAE) is proposed, which introduces group information to regularize the latent variables; it enhances CVAE by improving the relevance and coherence of responses while maintaining their diversity and informativeness.

Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity

A measure of coherence is introduced as the GloVe embedding similarity between the dialogue context and the generated response to improve coherence and diversity in encoder-decoder models for open-domain conversational agents.
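
The coherence measure described above can be sketched as the cosine similarity between averaged word embeddings of the context and the response. In this minimal sketch the toy vectors are hypothetical stand-ins; a real setup would load pretrained GloVe vectors instead.

```python
import math

# Toy word vectors standing in for GloVe embeddings (hypothetical values).
EMBED = {
    "how":  [0.1, 0.3, 0.5],
    "are":  [0.2, 0.1, 0.4],
    "you":  [0.3, 0.4, 0.1],
    "i":    [0.2, 0.2, 0.3],
    "am":   [0.1, 0.2, 0.4],
    "fine": [0.3, 0.3, 0.2],
}

def sentence_vector(tokens):
    """Average the word vectors of the tokens that have embeddings."""
    vecs = [EMBED[t] for t in tokens if t in EMBED]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def coherence(context_tokens, response_tokens):
    """Cosine similarity between averaged context and response embeddings."""
    a = sentence_vector(context_tokens)
    b = sentence_vector(response_tokens)
    if a is None or b is None:
        return 0.0
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

score = coherence(["how", "are", "you"], ["i", "am", "fine"])
```

A high score indicates the response stays close to the context in embedding space, which is the signal such models optimize or filter on.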

Reinforcing Coherence for Sequence to Sequence Model in Dialogue Generation

Three types of coherence models are proposed: an unlearned similarity function, a pretrained semantic matching function, and an end-to-end dual learning architecture. Experiments show that these models produce more specific and meaningful responses, outperforming Seq2Seq models in both metric-based and human evaluations.

Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

This work shows how several known problems of generative dialogue models can be addressed by extending the recently introduced unlikelihood loss to these cases, and demonstrates the efficacy of this approach across several dialogue tasks.
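
As a rough illustration of the unlikelihood idea: alongside the usual likelihood objective, a penalty term -log(1 - p(t)) is added for each token t in a set of negative candidates, pushing probability mass away from unwanted generations. A minimal sketch, assuming a hypothetical `probs` mapping from tokens to model probabilities:

```python
import math

def unlikelihood_loss(probs, negative_tokens):
    """Penalize probability mass placed on unwanted tokens:
    loss = -sum over negative candidates t of log(1 - p(t)).
    `probs` is a hypothetical token -> probability mapping."""
    eps = 1e-12  # guard against log(0) when p(t) is close to 1
    return -sum(math.log(max(1.0 - probs[t], eps)) for t in negative_tokens)

# Penalizing a token the model currently assigns probability 0.5.
loss = unlikelihood_loss({"the": 0.5, "cat": 0.3, "sat": 0.2}, ["the"])
```

Minimizing this term drives p(t) toward zero for the negative candidates, which is how the approach discourages repetitive or inconsistent continuations.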

Get The Point of My Utterance! Learning Towards Effective Responses with Multi-Head Attention Mechanism

A novel Multi-Head Attention Mechanism (MHAM) for generative dialog systems is proposed, aiming to capture multiple semantic aspects of the user utterance, and a regularizer is formulated to force different attention heads to concentrate on distinct aspects.
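
One common way to formulate such a head-diversity regularizer (the paper's exact form may differ) is a Frobenius-style penalty on the Gram matrix of the heads' attention distributions, ||A A^T - I||^2, which is small only when heads attend to different positions:

```python
def head_disagreement_penalty(attn):
    """Frobenius-norm penalty ||A A^T - I||_F^2 encouraging distinct heads.
    attn: list of attention distributions, one per head (the rows of A).
    A common formulation of this kind of regularizer, not necessarily
    the paper's exact one."""
    h = len(attn)
    penalty = 0.0
    for i in range(h):
        for j in range(h):
            dot = sum(a * b for a, b in zip(attn[i], attn[j]))
            target = 1.0 if i == j else 0.0
            penalty += (dot - target) ** 2
    return penalty

# Identical heads overlap fully and are penalized; disjoint heads are not.
p_same = head_disagreement_penalty([[0.5, 0.5], [0.5, 0.5]])
p_diff = head_disagreement_penalty([[1.0, 0.0], [0.0, 1.0]])
```

Adding this penalty to the training loss discourages the heads from collapsing onto the same part of the utterance.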

Deep Reinforcement Learning for Dialogue Generation

This work simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering.

Chameleons in Imagined Conversations: A New Approach to Understanding Coordination of Linguistic Style in Dialogs

It is argued that fictional dialogs offer a way to study coordination of linguistic style: authors create the conversations but do not receive the social benefits (the imagined characters do). Significant coordination across many families of function words is found in a large movie-script corpus.

DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder

DialogWAE is proposed, a conditional Wasserstein autoencoder specially designed for dialogue modeling that models the distribution of data by training a GAN within the latent variable space and develops a Gaussian mixture prior network to enrich the latent space.

A Knowledge-Grounded Neural Conversation Model

This paper presents a novel, fully data-driven, knowledge-grounded neural conversation model aimed at producing more contentful responses. It generalizes the widely used Sequence-to-Sequence (Seq2Seq) approach by conditioning responses on both conversation history and external "facts", making the model versatile and applicable in an open-domain setting.
...