
Learn to Focus: Hierarchical Dynamic Copy Network for Dialogue State Tracking

Linhao Zhang, Houfeng Wang
Recently, researchers have explored using the encoder-decoder framework to tackle dialogue state tracking (DST), which is a key component of task-oriented dialogue systems. However, they regard a multi-turn dialogue as a flat sequence, failing to focus on useful information when the sequence is long. In this paper, we propose a Hierarchical Dynamic Copy Network (HDCN) to facilitate focusing on the most informative turn, making it easier to extract slot values from the dialogue context. Based on…
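The turn-level focusing idea described in the abstract can be sketched as two stacked attention distributions whose product gives a copy distribution over every word in the dialogue. The NumPy sketch below is a minimal illustration under simplified assumptions (dot-product scoring against a single query vector); the function and variable names are hypothetical and this is not the paper's exact model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hierarchical_copy_attention(turn_hiddens, word_hiddens, query):
    """Two-level attention: turn-level scores pick the most informative
    turn, word-level scores pick tokens within each turn, and the final
    copy distribution over all words is the product of the two levels.

    turn_hiddens: (T, d) array, one encoder vector per dialogue turn.
    word_hiddens: list of T arrays, each (n_t, d), token vectors per turn.
    query: (d,) decoder query vector.
    """
    turn_attn = softmax(turn_hiddens @ query)           # (T,)
    copy_dist = []
    for t, words in enumerate(word_hiddens):
        word_attn = softmax(words @ query)              # (n_t,)
        copy_dist.append(turn_attn[t] * word_attn)      # weight words by their turn
    return np.concatenate(copy_dist)                    # sums to 1 over all words
```

Because each turn's word distribution sums to 1 and the turn weights sum to 1, the concatenated result is itself a valid probability distribution over every token in the dialogue, which is what a copy mechanism samples slot values from.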

Scalable Neural Dialogue State Tracking
This paper proposes an innovative neural model for dialogue state tracking, named Global encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue state with very low latency while maintaining high performance.
Efficient Dialogue State Tracking by Selectively Overwriting Memory
Analyzes the accuracy gaps between the current setting and the ground-truth-given setting, suggesting that improving state operation prediction is a promising direction for boosting DST performance.
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
This paper investigates how to approach DST with a generation framework and without a pre-defined ontology list, where the state for each turn of user utterance and system response is directly generated by a hierarchical encoder-decoder structure.
Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems
A Transferable Dialogue State Generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating transfer when predicting (domain, slot, value) triplets not encountered during training.
Global-Locally Self-Attentive Encoder for Dialogue State Tracking
This paper proposes the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules and shows that this significantly improves tracking of rare states.
MultiWOZ 2.1: Multi-Domain Dialogue State Corrections and State Tracking Baselines
This work uses crowdsourced workers to fix the state annotations and utterances in the original version of the MultiWOZ data, hoping that this dataset resource will allow more effective dialogue state tracking models to be built in the future.
A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues
A neural network-based generative architecture with latent stochastic variables that span a variable number of time steps; it improves upon recently proposed models, and the latent variables facilitate the generation of long outputs and help maintain context.
Neural Belief Tracker: Data-Driven Dialogue State Tracking
This work proposes a novel Neural Belief Tracking (NBT) framework which overcomes past limitations, matching the performance of state-of-the-art models that rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.
Dialog State Tracking: A Neural Reading Comprehension Approach
This work formulates dialog state tracking as a reading comprehension task: answering the question of what the belief state of the dialog is after reading the conversational context, using a simple attention-based neural network to point to the slot values within the conversation.
An End-to-end Approach for Handling Unknown Slot Values in Dialogue State Tracking
Describes an end-to-end architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtaining state-of-the-art accuracy on the standard DSTC2 benchmark.
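The pointer-network idea of extracting unknown slot values can be illustrated by pointing into the dialogue rather than classifying over a fixed value vocabulary. The sketch below scores each token against hypothetical start and end query vectors and returns the arg-max span; it is a simplified illustration under assumed dot-product scoring, not the paper's architecture.

```python
import numpy as np

def extract_span(token_hiddens, start_q, end_q):
    """Point to a slot-value span inside the dialogue: pick the best
    start token, then the best end token at or after it, so values
    never seen during training can still be extracted verbatim."""
    start_scores = token_hiddens @ start_q
    end_scores = token_hiddens @ end_q
    start = int(np.argmax(start_scores))
    end = start + int(np.argmax(end_scores[start:]))  # enforce end >= start
    return start, end
```

Constraining the end index to lie at or after the start keeps the returned span well-formed even when the raw end scores peak before the start position.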