Modeling Long Context for Task-Oriented Dialogue State Generation

@article{Quan2020ModelingLC,
  title={Modeling Long Context for Task-Oriented Dialogue State Generation},
  author={Jun Quan and Deyi Xiong},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.14080}
}
Based on the recently proposed transferable dialogue state generator (TRADE) that predicts dialogue states from utterance-concatenated dialogue context, we propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model as an auxiliary task for task-oriented dialogue state generation. By enabling the model to learn a better representation of the long dialogue context, our approaches attempt to solve the problem that the performance… 
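The abstract describes the tagging technique only at a high level. As a rough illustration, here is a minimal Python sketch of what marking utterance boundaries in a concatenated dialogue context could look like; the [sys]/[usr] tag tokens and the function are illustrative assumptions, not the authors' exact scheme:

```python
# Hypothetical sketch: prefix each utterance with a speaker tag so an
# encoder can recover utterance boundaries in a long concatenated context.
# The tag tokens are assumptions, not the paper's exact scheme.
def tag_dialogue_context(turns):
    """turns: list of (speaker, utterance) pairs, speaker in {"system", "user"}."""
    tagged = []
    for speaker, utterance in turns:
        tag = "[sys]" if speaker == "system" else "[usr]"
        tagged.append(f"{tag} {utterance}")
    return " ".join(tagged)

context = tag_dialogue_context([
    ("user", "i need a cheap hotel in the north"),
    ("system", "the worth house is a cheap guesthouse in the north"),
    ("user", "book it for two nights please"),
])
# -> "[usr] i need ... [sys] the worth house ... [usr] book it ..."
```

The auxiliary bidirectional language model would then be trained over the same tagged sequence, presumably with a joint objective along the lines of L = L_state + λ·L_biLM; the exact weighting is not given in this excerpt.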

Citations

LUNA: Learning Slot-Turn Alignment for Dialogue State Tracking
TLDR
LUNA explicitly aligns each slot with its most relevant utterance, then predicts the corresponding value based on this aligned utterance instead of all dialogue utterances.
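As a rough illustration of the alignment idea (not LUNA's actual learned alignment), a sketch that picks the utterance most similar to a slot representation, using placeholder embeddings:

```python
import numpy as np

def align_slot_to_utterance(slot_vec, utterance_vecs):
    """Pick the utterance whose embedding is most similar to the slot's.
    Embeddings here are placeholders; LUNA learns this alignment."""
    sims = [np.dot(slot_vec, u) / (np.linalg.norm(slot_vec) * np.linalg.norm(u))
            for u in utterance_vecs]
    return int(np.argmax(sims))

slot_vec = np.array([0.9, 0.1, 0.0])          # e.g. a "hotel-area" slot
utterances = [np.array([0.1, 0.8, 0.2]),      # greeting turn
              np.array([0.8, 0.2, 0.1])]      # "... in the north"
best = align_slot_to_utterance(slot_vec, utterances)
# the slot's value is then predicted from utterances[best] alone
```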
Contextual Semantic Parsing for Multilingual Task-Oriented Dialogues
TLDR
This paper shows that, given a large-scale dialogue dataset in one language, an effective semantic parser for other languages can be produced automatically using machine translation, and proposes automatic translation of dialogue datasets with alignment to ensure faithful translation of slot values and to eliminate costly human supervision.
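A minimal sketch of one way such alignment-preserving translation can work, by delexicalizing slot values before translation and restoring them afterwards; machine_translate and the placeholder format are illustrative assumptions, not the paper's exact method:

```python
# Hypothetical sketch: translate a delexicalized template so slot values
# survive translation verbatim. machine_translate is a stand-in for any
# MT system; the paper's alignment technique is more sophisticated.
def translate_with_alignment(utterance, slot_values, machine_translate):
    template = utterance
    for i, value in enumerate(slot_values):
        template = template.replace(value, f"<slot{i}>")      # delexicalize
    translated = machine_translate(template)                  # placeholders pass through
    for i, value in enumerate(slot_values):
        translated = translated.replace(f"<slot{i}>", value)  # re-lexicalize
    return translated
```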
Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey
TLDR
This survey is the most comprehensive and up-to-date one at present in the area of dialogue systems and dialogue-related tasks, extensively covering the popular frameworks, topics, and datasets.
RiSAWOZ: A Large-Scale Multi-Domain Wizard-of-Oz Dataset with Rich Semantic Annotations for Task-Oriented Dialogue Modeling
TLDR
RiSAWOZ is a large-scale multi-domain Chinese Wizard-of-Oz dataset with rich semantic annotations, containing 11.2K human-to-human multi-turn semantically annotated dialogues with more than 150K utterances spanning 12 domains, larger than all previously annotated human-to-human conversational datasets.
Dual Slot Selector via Local Reliability Verification for Dialogue State Tracking
TLDR
The two-stage DSS-DST, which consists of the Dual Slot Selector based on the current turn dialogue and the Slot Value Generator based on the dialogue history, achieves new state-of-the-art performance with significant improvements.
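A hypothetical control-flow sketch of the two-stage design, with select_slots and generate_value standing in for the paper's learned Dual Slot Selector and Slot Value Generator:

```python
# Stage 1 decides, from the current turn only, which slots need new
# values; stage 2 generates values for those slots from the history.
# Both callables are stand-ins for the paper's learned components.
def track_turn(state, current_turn, history, select_slots, generate_value):
    for slot in select_slots(current_turn):          # stage 1: slot selection
        state[slot] = generate_value(slot, history)  # stage 2: value generation
    return state                                     # unselected slots carry over
```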
Counterfactual Matters: Intrinsic Probing For Dialogue State Tracking
TLDR
The findings show that the performance variance of generative DSTs is due not only to the model structure itself but also to the distribution of cross-domain values.

References

SHOWING 1-10 OF 19 REFERENCES
Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems
TLDR
A Transferable Dialogue State Generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating transfer when predicting (domain, slot, value) triplets not encountered during training.
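TRADE's copy mechanism follows the familiar pointer-generator pattern of softly mixing a vocabulary distribution with a copy distribution over the dialogue history; a toy sketch with made-up numbers:

```python
import numpy as np

def copy_mixture(p_vocab, p_copy, p_gen):
    """Soft-gated mix of generating from the vocabulary and copying from
    the dialogue history: P_final = p_gen * P_vocab + (1 - p_gen) * P_copy."""
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

p_vocab = np.array([0.7, 0.2, 0.1, 0.0])   # decoder softmax over the vocabulary
p_copy  = np.array([0.0, 0.1, 0.0, 0.9])   # attention over history, mapped to vocab ids
print(copy_mixture(p_vocab, p_copy, p_gen=0.3))
# -> [0.21 0.13 0.03 0.63]: an unseen value can still be produced by copying
```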
Non-Autoregressive Dialog State Tracking
TLDR
A novel framework of Non-Autoregressive Dialog State Tracking (NADST) is proposed that can factor in potential dependencies among domains and slots, optimizing the model towards better prediction of dialogue states as a complete set rather than as separate slots.
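As a contrast with token-by-token decoding, a minimal sketch of the non-autoregressive idea, predicting every slot's value in a single parallel pass over placeholder logits; NADST's fertility prediction and dependency modeling are omitted here:

```python
import numpy as np

def decode_parallel(slot_logits):
    """One argmax per slot, computed in a single pass: the values of all
    (domain, slot) pairs are predicted jointly rather than one at a time.
    Logits are placeholders for real model outputs."""
    return np.argmax(slot_logits, axis=-1)

logits = np.random.randn(30, 400)   # 30 slots, 400 candidate value tokens
values = decode_parallel(logits)    # shape (30,), no sequential dependency
```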
Towards Universal Dialogue State Tracking
TLDR
The proposed StateNet is a universal dialogue state tracker that is independent of the number of values, shares parameters across all slots, and uses pre-trained word vectors instead of explicit semantic dictionaries.
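A sketch of the value-scoring idea with placeholder vectors: one shared prediction vector is compared against pretrained word vectors of candidate values, so the model is independent of the number of values and shares parameters across slots:

```python
import numpy as np

def score_values(prediction_vec, value_vecs):
    """Rank candidate values by 2-norm distance between a shared
    prediction vector and the values' pretrained word vectors; a softmax
    over negative distances yields probabilities. Vectors are placeholders."""
    dists = np.array([np.linalg.norm(prediction_vec - v) for v in value_vecs])
    scores = np.exp(-dists)
    return scores / scores.sum()   # closer value -> higher probability
```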
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
TLDR
This paper investigates how to approach DST with a generation framework that requires no pre-defined ontology list: the dialogue state is generated directly from each turn of user utterance and system response by a hierarchical encoder-decoder structure.
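A hypothetical top-down sketch of ontology-free hierarchical generation, with the three decoders as stand-ins for the paper's hierarchical encoder-decoder:

```python
# Decode the state top-down (domains, then slots per domain, then a
# value per slot) instead of classifying over a pre-defined value list.
# The decoder callables are illustrative stand-ins.
def generate_state(context, decode_domains, decode_slots, decode_value):
    state = {}
    for domain in decode_domains(context):
        for slot in decode_slots(context, domain):
            state[(domain, slot)] = decode_value(context, domain, slot)
    return state
```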
Global-Locally Self-Attentive Encoder for Dialogue State Tracking
TLDR
This paper proposes the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules, and shows that GLAD significantly improves tracking of rare states.
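At its core, the global-local idea reduces to mixing a shared encoding with a slot-specific one; a minimal sketch with a fixed mixture weight (in GLAD, beta is learned per slot):

```python
import numpy as np

def global_local_encoding(x_global, x_local, beta):
    """Mix a globally shared encoding with a per-slot local encoding.
    Sharing the global module is what helps rare slot-value pairs."""
    return beta * x_global + (1.0 - beta) * x_local

h = global_local_encoding(np.array([0.2, 0.5]),   # from the shared encoder
                          np.array([0.9, 0.1]),   # from this slot's encoder
                          beta=0.7)
```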
A Network-based End-to-End Trainable Task-oriented Dialogue System
TLDR
This work introduces a neural network-based, text-in text-out, end-to-end trainable goal-oriented dialogue system, along with a new way of collecting dialogue data based on a novel pipelined Wizard-of-Oz framework; the system can converse with human subjects naturally whilst helping them accomplish tasks in a restaurant search domain.
Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing
TLDR
A novel approach is introduced that fully utilizes semantic similarity between dialogue utterances and the ontology terms, allowing the information to be shared across domains, and demonstrates great capability in handling multi-domain dialogues.
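A sketch of the similarity idea with placeholder embeddings: every ontology term is scored against the utterance by one shared function, so nothing is tied to a particular domain or slot:

```python
import numpy as np

def ontology_scores(utterance_vec, term_vecs):
    """Cosine similarity between one utterance and every ontology term,
    computed by a single shared function so information can be shared
    across domains. Embeddings are placeholders for learned ones."""
    norms = np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(utterance_vec)
    return term_vecs @ utterance_vec / norms   # one score per ontology term
```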
Efficient Dialogue State Tracking by Selectively Overwriting Memory
TLDR
The accuracy gap between using the model's predicted previous state and using the ground-truth previous state is analyzed, and the results suggest that improving state operation prediction is a promising direction for boosting DST performance.
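A hypothetical sketch of selectively overwriting memory: each slot receives one of four state operations, and value generation runs only for slots marked UPDATE (predict_op and generate_value stand in for the learned components):

```python
CARRYOVER, DELETE, DONTCARE, UPDATE = "carryover", "delete", "dontcare", "update"

def update_state(state, turn, predict_op, generate_value):
    """Classify an operation per slot; generate a value only when needed."""
    for slot in list(state):
        op = predict_op(slot, turn)
        if op == UPDATE:
            state[slot] = generate_value(slot, turn)
        elif op == DELETE:
            state[slot] = None
        elif op == DONTCARE:
            state[slot] = "dontcare"
        # CARRYOVER: keep the previous value unchanged
    return state
```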
Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering
TLDR
This paper proposes to model multi-domain dialogue state tracking as a question answering problem, referred to as Dialogue State Tracking via Question Answering (DSTQA), and uses a dynamically-evolving knowledge graph to explicitly learn relationships between (domain, slot) pairs.
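A minimal sketch of the QA framing, with an assumed question template and answer_question standing in for the reader model; DSTQA's dynamically evolving knowledge graph is omitted:

```python
# Hypothetical sketch: cast each (domain, slot) pair as a question over
# the dialogue and let a reader model answer it. The template wording is
# an illustrative assumption, not the paper's exact prompt.
def track_as_qa(dialogue, domain_slots, answer_question):
    state = {}
    for domain, slot in domain_slots:
        question = f"what is the value of the {slot} of the {domain}?"
        state[(domain, slot)] = answer_question(dialogue, question)
    return state
```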
Toward Scalable Neural Dialogue State Tracking Model
TLDR
A new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model, which reduces the latency in training and inference times by 35% on average while preserving belief state tracking performance.