Multi-Domain Dialogue Acts and Response Co-Generation
@article{Wang2020MultiDomainDA,
  title   = {Multi-Domain Dialogue Acts and Response Co-Generation},
  author  = {Kai Wang and Junfeng Tian and Rui Wang and Xiaojun Quan and Jianxing Yu},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2004.12363}
}
Generating fluent and informative responses is of critical importance for task-oriented dialogue systems. Existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation. There are at least two shortcomings with such approaches. First, the inherent structures of multi-domain dialogue acts are neglected. Second, the semantic associations between acts and responses are not taken into account for response generation. To address these issues…
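The abstract is cut off above. As a rough, hypothetical illustration of the pipeline it criticizes (not the authors' actual co-generation model), the sketch below predicts multiple dialogue acts with independent classifiers and only then conditions the response on those hard decisions; all names, acts and shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dialogue-act inventory (hypothetical; real acts are domain-act-slot structures).
ACTS = ["hotel-inform-price", "hotel-request-area", "general-greet"]

def encode(context):
    """Stand-in context encoder: hash words into a fixed-size bag-of-words vector."""
    vec = np.zeros(16)
    for w in context.lower().split():
        vec[hash(w) % 16] += 1.0
    return vec

def predict_acts(h, W, threshold=0.5):
    """Pipeline step 1: an independent sigmoid per act (ignores act structure)."""
    probs = 1.0 / (1.0 + np.exp(-W @ h))
    return [a for a, p in zip(ACTS, probs) if p > threshold]

def generate_response(h, acts):
    """Pipeline step 2: response conditioned only on the hard act decisions."""
    return f"[response conditioned on acts={acts}]"

W = rng.normal(size=(len(ACTS), 16))
h = encode("i need a cheap hotel in the north")
print(generate_response(h, predict_acts(h, W)))
# A co-generation model would instead decode act and response tokens jointly,
# letting response generation use soft act representations rather than hard labels.
```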
27 Citations
Towards Enriching Responses with Crowd-sourced Knowledge for Task-oriented Dialogue
- Computer Science · MuCAI @ ACM Multimedia
- 2021
This work designs a neural response generation model, EnRG, that naturally combines the power of pre-trained GPT-2 in response semantic modeling with the merit of dual attention in making use of external crowd-sourced knowledge.
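A minimal numpy sketch of the dual-attention idea, assuming a toy fusion by concatenation; the function names and dimensions are hypothetical and do not reproduce EnRG's actual formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(query, context_states, knowledge_states):
    """Attend separately over dialogue context and external knowledge, then fuse
    (here by simple concatenation) before feeding the response decoder."""
    a_ctx = softmax(context_states @ query)      # attention over context tokens
    a_kn = softmax(knowledge_states @ query)     # attention over knowledge snippets
    ctx_vec = a_ctx @ context_states
    kn_vec = a_kn @ knowledge_states
    return np.concatenate([ctx_vec, kn_vec])

rng = np.random.default_rng(0)
d = 8
fused = dual_attention(rng.normal(size=d),
                       rng.normal(size=(5, d)),   # 5 context token states
                       rng.normal(size=(3, d)))   # 3 crowd-sourced snippets
print(fused.shape)  # (16,)
```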
Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory
- Computer Science · FINDINGS
- 2021
A retrieve-and-memorize framework to enhance the learning of system actions: a neural context-aware retrieval module retrieves multiple candidate system actions from the training set given a dialogue context, and a memory-augmented multi-decoder network generates the system actions conditioned on those candidates.
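A minimal sketch of the retrieval step, assuming dialogue contexts are already embedded and using cosine similarity; the names and shapes are hypothetical, not the paper's actual retrieval module.

```python
import numpy as np

def top_k_actions(context_vec, train_context_vecs, train_actions, k=3):
    """Context-aware retrieval (simplified): cosine similarity between the current
    dialogue-context embedding and training contexts, returning the system actions
    of the k nearest neighbours as candidates."""
    sims = train_context_vecs @ context_vec / (
        np.linalg.norm(train_context_vecs, axis=1) * np.linalg.norm(context_vec) + 1e-9)
    idx = np.argsort(-sims)[:k]
    return [train_actions[i] for i in idx]

rng = np.random.default_rng(1)
train_vecs = rng.normal(size=(100, 32))               # embedded training contexts
train_acts = [f"action-set-{i}" for i in range(100)]  # their annotated system actions
print(top_k_actions(rng.normal(size=32), train_vecs, train_acts))
# A memory-augmented multi-decoder would then attend over these candidates
# while generating the system actions for the current turn.
```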
MMConv: An Environment for Multimodal Conversational Search across Multiple Domains
- Computer Science · SIGIR
- 2021
The Multimodal Multi-domain Conversational dataset (MMConv) is introduced, a fully annotated collection of human-to-human role-playing dialogues spanning multiple domains and tasks, and state-of-the-art methods are adopted as baselines for each of these tasks.
Modelling Hierarchical Structure between Dialogue Policy and Natural Language Generator with Option Framework for Task-oriented Dialogue System
- Computer Science · ICLR
- 2021
This work proposes modelling the hierarchical structure between dialogue policy and natural language generator (NLG) with the option framework, called HDNO, in which a latent dialogue act is used to avoid designing specific dialogue act representations; a discriminator modelled with language models serves as an additional reward to improve comprehensibility, and the semantic meanings of the latent dialogue acts are examined to demonstrate their explanatory power.
Transferable Dialogue Systems and User Simulators
- Computer Science · ACL
- 2021
The goal is to develop a modelling framework that can incorporate new dialogue scenarios through self-play between the two agents, which proves highly effective in bootstrapping the performance of both agents in transfer learning.
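A toy sketch of the self-play idea, with an invented goal and slot-filling policy; rolled-out dialogues of this kind are what such a framework could use to bootstrap both agents in a new scenario.

```python
import random

random.seed(0)

GOAL = {"area": "north", "price": "cheap"}   # hypothetical user goal

def user_simulator(goal, asked_slot):
    """Reveals the value of whichever slot the system asked about."""
    return {asked_slot: goal[asked_slot]} if asked_slot in goal else {}

def system_policy(belief):
    """Asks about an unfilled goal slot, or ends the dialogue when done."""
    missing = [s for s in GOAL if s not in belief]
    return random.choice(missing) if missing else None

# Self-play loop: dialogues rolled out like this can serve as extra supervision
# or reinforcement-learning episodes when transferring to a new scenario.
belief = {}
for turn in range(10):
    slot = system_policy(belief)
    if slot is None:
        break
    belief.update(user_simulator(GOAL, slot))
print("success:", belief == GOAL, "after", turn, "turns")
```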
Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey
- Computer Science · ArXiv
- 2021
This survey is the most comprehensive and up-to-date one at present in the area of dialogue systems and dialogue-related tasks, extensively covering the popular frameworks, topics, and datasets.
User Satisfaction Estimation with Sequential Dialogue Act Modeling in Goal-oriented Conversational Systems
- Computer Science · WWW
- 2022
This paper proposes a novel framework, namely USDA, to incorporate the sequential dynamics of dialogue acts for predicting user satisfaction, by jointly learning User Satisfaction Estimation and Dialogue Act Recognition tasks.
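A minimal sketch of a joint objective over a shared turn representation, assuming simple linear heads; USDA's actual model additionally captures the sequential dynamics of dialogue acts, which this sketch omits.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_loss(h, W_sat, W_act, sat_label, act_label, alpha=0.5):
    """Multi-task objective over a shared turn representation h:
    cross-entropy for the user-satisfaction level plus cross-entropy for the
    dialogue act, weighted by alpha (illustrative only)."""
    p_sat = softmax(W_sat @ h)
    p_act = softmax(W_act @ h)
    return -alpha * np.log(p_sat[sat_label]) - (1 - alpha) * np.log(p_act[act_label])

rng = np.random.default_rng(0)
h = rng.normal(size=16)                       # shared encoder output for one turn
loss = joint_loss(h, rng.normal(size=(5, 16)), rng.normal(size=(10, 16)),
                  sat_label=3, act_label=7)
print(float(loss))
```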
Phrase-Level Action Reinforcement Learning for Neural Dialog Response Generation
- Computer Science · FINDINGS
- 2021
This paper proposes phrase-level action reinforcement learning (PHRASERL), which allows the model to flexibly alter sentence structure and content through sequential action selection, and achieves competitive results with state-of-the-art models on automatic evaluation metrics.
Hierarchical Transformer for Task Oriented Dialog Systems
- Computer Science · NAACL
- 2021
It is shown how a standard transformer can be morphed into any hierarchical encoder, including HRED- and HIBERT-like models, by using specially designed attention masks and positional encodings; a wide range of experiments demonstrates that this helps achieve better natural language understanding of the contexts in transformer-based models for task-oriented dialog systems.
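A small sketch of one way specially designed attention masks can induce hierarchy in a flat transformer: a block-diagonal mask restricts token-level attention to each utterance. The helper below is illustrative, not the paper's exact masking scheme.

```python
import numpy as np

def hierarchical_mask(utterance_lengths):
    """Block-diagonal self-attention mask: each token may only attend to tokens in
    its own utterance, one simple way to make a flat transformer behave like a
    hierarchical (HRED-style) encoder. True = attention allowed."""
    total = sum(utterance_lengths)
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for n in utterance_lengths:
        mask[start:start + n, start:start + n] = True
        start += n
    return mask

# Three utterances of 4, 3 and 5 tokens in one concatenated sequence.
print(hierarchical_mask([4, 3, 5]).astype(int))
# An utterance-level stage (or extra context positions with their own positional
# encodings) can then attend across utterance summaries.
```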
References
Showing 1-10 of 35 references
Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems
- Computer Science · ACL
- 2019
A Transferable Dialogue State Generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating transfer when predicting (domain, slot, value) triplets not encountered during training.
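A simplified stand-in for a copy mechanism in the pointer-generator style, mixing a vocabulary distribution with attention weights scattered onto context token ids; the shapes and the mixing gate p_gen are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_distribution(vocab_logits, attn_scores, context_token_ids, p_gen, vocab_size):
    """Pointer-generator-style mixture: p_gen * P_vocab + (1 - p_gen) * P_copy,
    where P_copy scatters attention weights onto the context tokens' vocab ids."""
    p_vocab = softmax(vocab_logits)
    p_attn = softmax(attn_scores)
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, context_token_ids, p_attn)   # handles repeated context tokens
    return p_gen * p_vocab + (1 - p_gen) * p_copy

rng = np.random.default_rng(0)
V = 50
dist = copy_distribution(rng.normal(size=V), rng.normal(size=6),
                         np.array([3, 17, 17, 42, 5, 9]), p_gen=0.7, vocab_size=V)
print(dist.sum())  # ~1.0
```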
A Modular Task-oriented Dialogue System Using a Neural Mixture-of-Experts
- Computer Science · ArXiv
- 2019
A neural Modular Task-oriented Dialogue System (MTDS) framework, in which a few expert bots are combined to generate the response for a given dialogue context, and a Token-level Mixture-of-Expert (TokenMoE) model to implement MTDS.
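A one-step numpy sketch of token-level expert mixing: each expert proposes a next-token distribution and a gating network combines them per token. Names and shapes are hypothetical, not TokenMoE's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_moe_step(h, expert_W, gate_W):
    """One decoding step of a token-level mixture of experts (illustrative):
    each expert proposes a next-token distribution from the decoder state h,
    and a gating network mixes them for this token."""
    expert_dists = softmax(np.einsum('evd,d->ev', expert_W, h), axis=-1)  # (E, V)
    gate = softmax(gate_W @ h)                                            # (E,)
    return gate @ expert_dists                                            # (V,)

rng = np.random.default_rng(0)
E, V, d = 3, 20, 8                      # e.g. one expert bot per domain
mixed = token_moe_step(rng.normal(size=d),
                       rng.normal(size=(E, V, d)),
                       rng.normal(size=(E, d)))
print(mixed.sum())  # ~1.0
```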
Deep Reinforcement Learning for Dialogue Generation
- Computer Science · EMNLP
- 2016
This work simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering.
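A minimal sketch of the policy-gradient idea: a REINFORCE-style surrogate whose gradient weights each sampled response's log-likelihood by its baseline-corrected reward; the rewards here are placeholder numbers standing in for a composite conversational reward.

```python
import numpy as np

def reinforce_surrogate(log_probs, rewards, baseline=None):
    """Surrogate objective whose gradient w.r.t. model parameters is the REINFORCE
    policy gradient: mean over samples of (R - b) * log p(sequence).
    In practice log_probs come from the model and autograd differentiates through
    them; here they are plain numbers for illustration."""
    r = np.asarray(rewards, dtype=float)
    b = r.mean() if baseline is None else baseline   # variance-reducing baseline
    return float(np.mean((r - b) * np.asarray(log_probs)))

# Three sampled responses with their (composite) rewards.
print(reinforce_surrogate(log_probs=[-12.3, -8.1, -15.0], rewards=[0.2, 0.9, 0.1]))
```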
DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation
- Computer Science · ACL
- 2020
It is shown that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems.
Multi-task Learning for Joint Language Understanding and Dialogue State Tracking
- Computer Science · SIGDIAL Conference
- 2018
This paper presents a novel approach for multi-task learning of language understanding (LU) and dialogue state tracking (DST) in task-oriented dialogue systems, and investigates the use of scheduled sampling on the LU output for the current user utterance, as well as on the DST output for the preceding turn, to bridge the gap between training and inference.
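A minimal sketch of scheduled sampling as a way to bridge the train/inference gap: with a decaying probability the gold token is fed back, otherwise the model's own previous prediction; the decay schedule below is an illustrative choice.

```python
import random

random.seed(0)

def next_input(gold_token, predicted_token, teacher_forcing_prob):
    """Scheduled sampling: with probability p feed the gold token (teacher forcing),
    otherwise feed the model's own previous prediction, so training conditions
    gradually resemble inference conditions."""
    return gold_token if random.random() < teacher_forcing_prob else predicted_token

def linear_decay(step, k=1e-4, floor=0.25):
    """One common schedule: decay the teacher-forcing probability over training."""
    return max(floor, 1.0 - k * step)

for step in [0, 5000, 20000]:
    p = linear_decay(step)
    print(step, p, next_input("inform(area=north)", "request(area)", p))
```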
MultiWOZ 2.1: Multi-Domain Dialogue State Corrections and State Tracking Baselines
- Computer Science · ArXiv
- 2019
This work uses crowdsourced workers to fix the state annotations and utterances in the original version of the MultiWOZ data, hoping that this dataset resource will allow for more effective dialogue state tracking models to be built in the future.
MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines
- Computer Science · LREC
- 2020
This work uses crowdsourced workers to re-annotate states and utterances based on the original utterances in the dataset, and benchmarks a number of state-of-the-art dialogue state tracking models on MultiWOZ 2.1, reporting joint state tracking performance on the corrected state annotations.
DialogAct2Vec: Towards End-to-End Dialogue Agent by Multi-Task Representation Learning
- Computer Science · ArXiv
- 2019
A novel joint end-to-end model based on multi-task representation learning, named DialogAct2Vec, is proposed; it captures knowledge from heterogeneous information by automatically learning knowledgeable low-dimensional embeddings from data.
Latent Intention Dialogue Models
- Computer Science · ICML
- 2017
The experimental evaluation of the proposed Latent Intention Dialogue Model shows that the model outperforms published benchmarks for both corpus-based and human evaluation, demonstrating the effectiveness of discrete latent variable models for learning goal-oriented dialogues.
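A minimal sketch of a discrete latent intention: a categorical distribution over a handful of latent intentions is computed from the context, one is sampled, and the response is conditioned on it; the templates and shapes are invented, and training (e.g. variational inference) is omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_intention_and_respond(context_vec, W_policy, templates, rng):
    """Discrete latent variable sketch: the policy places a categorical distribution
    over a small set of latent intentions given the context, one intention is
    sampled, and the response is conditioned on it."""
    probs = softmax(W_policy @ context_vec)
    z = rng.choice(len(templates), p=probs)
    return z, templates[z]

rng = np.random.default_rng(0)
templates = ["[request slot]", "[inform result]", "[confirm booking]"]  # hypothetical
z, resp = sample_intention_and_respond(rng.normal(size=12),
                                       rng.normal(size=(3, 12)), templates, rng)
print(z, resp)
```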
Global-Locally Self-Attentive Dialogue State Tracker
- Computer Science · ACL
- 2018
This paper proposes the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules and shows that this significantly improves tracking of rare states.
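A simplified sketch of the global-local idea: each slot mixes a globally shared encoding with a slot-specific one, so rare slots still benefit from globally trained parameters; the linear encoders and mixing weight here are illustrative assumptions, not GLAD's exact modules.

```python
import numpy as np

def global_local_encoding(tokens_emb, global_W, local_W, beta):
    """GLAD-style global-local mixing (simplified): a globally shared encoding is
    blended with a slot-specific encoding so that rare slots can still draw on
    parameters trained across all slots. beta in [0, 1] is the slot-specific weight."""
    global_enc = tokens_emb @ global_W          # shared across all slots
    local_enc = tokens_emb @ local_W            # parameters for this slot only
    return beta * local_enc + (1.0 - beta) * global_enc

rng = np.random.default_rng(0)
T, d = 6, 16                                    # 6 utterance tokens, dim 16
mixed = global_local_encoding(rng.normal(size=(T, d)),
                              rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                              beta=0.3)
print(mixed.shape)  # (6, 16)
```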