Manual-Guided Dialogue for Flexible Conversational Agents

@article{Takanobu2022ManualGuidedDF,
  title={Manual-Guided Dialogue for Flexible Conversational Agents},
  author={Ryuichi Takanobu and Hao Zhou and Yankai Lin and Peng Li and Jie Zhou and Minlie Huang},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.07597}
}
How to build and use dialogue data efficiently, and how to deploy models in different domains at scale, are two critical issues in building a task-oriented dialogue system. In this paper, we propose a novel manual-guided dialogue scheme to alleviate these problems, where the agent learns the tasks from both dialogue and manuals. The manual is an unstructured textual document that guides the agent in interacting with users and the database during the conversation. Our proposed scheme reduces…

References (showing 1-10 of 38)

Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset

This work introduces the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains, and presents a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots provided as input.
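
The core idea can be sketched in a few lines. This is an illustrative toy (the schema, slot names, and `predict_state` helper are hypothetical, not the SGD codebase): because the service schema is given to the model at inference time, predictions are constrained to whatever intents and slots that schema declares, so new services can be supported without retraining.

```python
def predict_state(raw_predictions: dict, schema: dict) -> dict:
    """Keep only the slot predictions that the dynamic schema permits."""
    allowed = set(schema["slots"])
    return {slot: value for slot, value in raw_predictions.items() if slot in allowed}

# A hypothetical schema supplied as input alongside the dialogue.
restaurant_schema = {
    "service": "Restaurants",
    "intents": ["FindRestaurant", "ReserveRestaurant"],
    "slots": ["cuisine", "city", "party_size"],
}

raw_predictions = {"cuisine": "italian", "city": "Palo Alto", "departure_time": "9am"}
state = predict_state(raw_predictions, restaurant_schema)
# "departure_time" is dropped: it is not declared by the restaurant schema.
```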

Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems

This work presents Alexa Conversations, a new approach for building goal-oriented dialogue systems that is scalable, extensible as well as data efficient, and provides out-of-the-box support for natural conversational phenomenon like entity sharing across turns or users changing their mind during conversation without requiring developers to provide any such dialogue flows.

Frames: a corpus for adding memory to goal-oriented dialogue systems

The frame tracking task, which consists of keeping track of different semantic frames throughout each dialogue, is proposed, along with a rule-based baseline through which the task is analysed.

Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems

A Transferable Dialogue State Generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating transfer when predicting (domain, slot, value) triplets not encountered during training.
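
To make the state format concrete, here is a toy sketch of the (domain, slot, value) triplet representation; the keyword-based `copy_value` helper is a hypothetical stand-in for TRADE's learned pointer/copy mechanism, which lets values be copied verbatim from the utterance even when they were never seen in training.

```python
def copy_value(utterance: str, trigger: str) -> str:
    """'Copy' the token that follows a trigger word out of the utterance."""
    tokens = utterance.lower().rstrip(".").split()
    return tokens[tokens.index(trigger) + 1]

# Dialogue state as a set of (domain, slot, value) triplets.
state = set()
utterance = "Book a table at a cheap italian place in Cambridge."
state.add(("restaurant", "food", copy_value(utterance, "cheap")))
state.add(("restaurant", "area", copy_value(utterance, "in")))
```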

Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems

The Action-Based Conversations Dataset (ABCD), a fully-labeled dataset with over 10K human-to-human dialogues containing 55 distinct user intents requiring unique sequences of actions constrained by policies to achieve task success, is introduced.

A Simple Language Model for Task-Oriented Dialogue

SimpleTOD is a simple approach to task-oriented dialogue that uses a single causal language model trained on all sub-tasks recast as a single sequence prediction problem, which allows it to fully leverage transfer learning from pre-trained, open domain, causal language models such as GPT-2.
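
A rough sketch of this single-sequence framing: dialogue context, belief state, system action, and response are concatenated into one token stream, so a causal LM can be trained on it with plain next-token prediction. The delimiter tokens and example strings below are illustrative, not the paper's exact markup.

```python
def to_sequence(context: str, belief: str, action: str, response: str) -> str:
    """Concatenate all sub-task outputs into one training sequence."""
    return " ".join([
        "<context>", context, "</context>",
        "<belief>", belief, "</belief>",
        "<action>", action, "</action>",
        "<response>", response, "</response>",
    ])

seq = to_sequence(
    "user: i want a cheap hotel in the north",
    "hotel price cheap, hotel area north",
    "hotel inform name",
    "the ashley hotel is a cheap option in the north",
)
```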

Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering

This paper proposes to model multi-domain dialogue state tracking as a question answering problem, referred to as Dialogue State Tracking via Question Answering (DSTQA), and uses a dynamically-evolving knowledge graph to explicitly learn relationships between (domain, slot) pairs.
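
The QA reformulation is easy to illustrate: each (domain, slot) pair is turned into a natural-language question for a reading-comprehension model to answer from the dialogue context. The question template below is a hypothetical example, not the one used in the paper.

```python
def slot_to_question(domain: str, slot: str) -> str:
    """Turn a (domain, slot) pair into a natural-language question."""
    return f"What is the {slot.replace('_', ' ')} of the {domain} the user wants?"

pairs = [("hotel", "price_range"), ("restaurant", "food")]
questions = [slot_to_question(d, s) for d, s in pairs]
```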

Key-Value Retrieval Networks for Task-Oriented Dialogue

This work proposes a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics.
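
As a toy illustration of the key-value idea (word overlap standing in for the paper's learned attention over keys): knowledge-base rows are flattened into (subject, relation) keys mapping to values, and the system retrieves the value whose key best matches the query. All entries and names here are hypothetical.

```python
# Knowledge base flattened into (subject, relation) -> value entries.
kb = {
    ("dinner", "time"): "8pm",
    ("dinner", "location"): "the_westin",
    ("football", "time"): "saturday",
}

def retrieve(query_words: set, kb: dict) -> str:
    """Return the value whose key shares the most words with the query."""
    def score(key):
        return sum(word in query_words for word in key)
    return kb[max(kb, key=score)]

value = retrieve({"what", "time", "is", "dinner"}, kb)  # matches ("dinner", "time")
```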

Doc2Dial: A Goal-Oriented Document-Grounded Dialogue Dataset

We introduce doc2dial, a new dataset of goal-oriented dialogues that are grounded in the associated documents. Inspired by how the authors compose documents for guiding end users, we first construct…

MultiWOZ 2.3: A multi-domain task-oriented dataset enhanced with annotation corrections and co-reference annotation

This paper introduces MultiWOZ 2.3, which differentiates incorrect annotations in dialogue acts from those in dialogue states and adds the co-reference annotations found lacking in the original dataset, ensuring consistency between dialogue acts and dialogue states.