Generating Relevant and Coherent Dialogue Responses using Self-Separated Conditional Variational AutoEncoders
- Bin Sun, Shaoxiong Feng, Yiwei Li, Jiamou Liu, Kan Li
- Computer Science · Annual Meeting of the Association for…
- 7 June 2021
A Self-Separated Conditional Variational AutoEncoder (SepaCVAE) is proposed that introduces group information to regularize the latent variables, enhancing CVAE by improving the relevance and coherence of responses while maintaining their diversity and informativeness.
An Option Gate Module for Sentence Inference on Machine Reading Comprehension
- Xuming Lin, Ruifang Liu, Yiwei Li
- Computer Science · International Conference on Information and…
- 17 October 2018
An option gate approach for reading comprehension that applies a sentence-level option gate module, enabling the model to incorporate sentence-level information and reason more effectively rather than relying on direct word matching or paraphrasing.
A Multistage Ranking Strategy for Personalized Hotel Recommendation with Human Mobility Data
This paper proposes a personalized multistage pairwise learning-to-rank model, which captures more personalized information by utilizing users' full-scenario hotel click data from map applications and effectively alleviates the cold-start problem.
Modeling Complex Dialogue Mappings via Sentence Semantic Segmentation Guided Conditional Variational Auto-Encoder
Towards Diverse, Relevant and Coherent Open-Domain Dialogue Generation via Hybrid Latent Variables
Experimental results on two dialogue generation datasets show that CHVT is superior to the traditional transformer-based variational mechanism, and the benefit of applying HLV to fine-tuning two pre-trained dialogue models (PLATO and BART-base) is demonstrated.
THINK: A Novel Conversation Model for Generating Grammatically Correct and Coherent Responses
Diversifying Neural Dialogue Generation via Negative Distillation
- Yiwei Li, Shaoxiong Feng, Bin Sun, Kan Li
- Computer Science · North American Chapter of the Association for…
- 5 May 2022
This paper proposes a novel negative training paradigm, called negative distillation, that steers the model away from undesirable generic responses while avoiding the problems of prior approaches, and shows that this method significantly outperforms previous negative training methods.
Stop Filtering: Multi-View Attribute-Enhanced Dialogue Learning
A multi-view attribute-enhanced dialogue learning framework that strengthens attribute-related features more robustly and comprehensively, and improves model performance by enhancing dialogue attributes and fusing view-specific knowledge.