Can I Be of Further Assistance? Using Unstructured Knowledge Access to Improve Task-oriented Conversational Modeling

@article{Jin2021CanIB,
  title={Can I Be of Further Assistance? Using Unstructured Knowledge Access to Improve Task-oriented Conversational Modeling},
  author={Di Jin and S. Kim and Dilek Z. Hakkani-T{\"u}r},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.09174}
}
Most prior work on task-oriented dialogue systems is restricted to limited coverage of domain APIs. However, users often have requests that are out of the scope of these APIs. This work focuses on responding to these beyond-API-coverage user turns by incorporating external, unstructured knowledge sources. Our approach works in a pipelined manner with knowledge-seeking turn detection, knowledge selection, and response generation in sequence. We introduce novel data augmentation methods for…
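The abstract describes a three-stage pipeline. The sketch below shows how the stages fit together; the function names, the question-mark heuristic, and the lexical-overlap ranker are illustrative assumptions, not the authors' trained models.

```python
# Minimal sketch of the three-stage pipeline (detection -> selection -> generation).
# All helpers here are illustrative stand-ins for the trained models in the paper.

from typing import List, Optional


def detect_knowledge_seeking_turn(dialogue: List[str]) -> bool:
    """Stage 1: decide whether the last user turn needs external (beyond-API)
    knowledge. Placeholder heuristic; a real system would use a trained classifier."""
    return dialogue[-1].strip().endswith("?")


def select_knowledge(dialogue: List[str], snippets: List[str]) -> Optional[str]:
    """Stage 2: rank unstructured knowledge snippets (e.g. FAQs) against the
    dialogue context. Placeholder: naive lexical overlap instead of a neural ranker."""
    if not snippets:
        return None
    last_turn = dialogue[-1].lower().split()
    scored = [(sum(word in s.lower() for word in last_turn), s) for s in snippets]
    best_score, best_snippet = max(scored)
    return best_snippet if best_score > 0 else None


def generate_response(dialogue: List[str], knowledge: Optional[str]) -> str:
    """Stage 3: produce a response grounded on the selected snippet. A real system
    would condition a seq2seq model on (dialogue, knowledge)."""
    if knowledge is None:
        return "Let me look into the available options for you."
    return f"According to our information: {knowledge}"


def respond(dialogue: List[str], snippets: List[str]) -> str:
    """Run the pipeline end to end for one user turn."""
    knowledge = select_knowledge(dialogue, snippets) if detect_knowledge_seeking_turn(dialogue) else None
    return generate_response(dialogue, knowledge)
```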

References

Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access
An augmented version of MultiWOZ 2.1 is introduced, which includes new out-of-API-coverage turns and responses grounded on external knowledge sources, and defines three sub-tasks: knowledge-seeking turn detection, knowledge selection, and knowledge-grounded response generation, which can be modeled individually or jointly.
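To make the three sub-tasks concrete, a single augmented turn can be pictured as below; the field names are illustrative assumptions rather than the dataset's exact schema.

```python
# Illustrative (not the dataset's exact schema) example of an augmented,
# knowledge-grounded dialogue turn: an out-of-API-coverage user question, the
# external FAQ snippet that answers it, and labels for the three sub-tasks.

example = {
    "dialogue": [
        {"speaker": "user", "text": "I need a hotel in the centre."},
        {"speaker": "system", "text": "Alexander B&B is available. Shall I book it?"},
        {"speaker": "user", "text": "Can I bring my dog?"},   # beyond-API turn
    ],
    "knowledge_seeking": True,                                 # sub-task 1 label
    "selected_knowledge": {                                    # sub-task 2 label
        "entity": "Alexander B&B",
        "question": "Are pets allowed?",
        "answer": "Yes, small pets are welcome for a fee.",
    },
    "response": "Yes, small pets are welcome at Alexander B&B for a small fee.",  # sub-task 3
}
```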
Key-Value Retrieval Networks for Task-Oriented Dialogue
This work proposes a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics.
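A rough sketch of the key-value retrieval idea, reduced to a single attention step over an embedded knowledge base; the dimensions and the way the retrieved vector is used are simplified assumptions, not the paper's full mechanism.

```python
# Key-value retrieval step: attend over KB entry keys with the decoder state,
# then read out a weighted sum of the corresponding values.

import torch
import torch.nn.functional as F

hidden_dim = 128
num_kb_entries = 10

decoder_state = torch.randn(1, hidden_dim)            # current decoder/dialogue state
kb_keys = torch.randn(num_kb_entries, hidden_dim)     # embedded KB keys (e.g. subject + relation)
kb_values = torch.randn(num_kb_entries, hidden_dim)   # embedded KB values (e.g. an address)

scores = decoder_state @ kb_keys.t()                  # (1, num_kb_entries) key-state similarities
weights = F.softmax(scores, dim=-1)                   # attention over KB entries
retrieved = weights @ kb_values                        # (1, hidden_dim) weighted sum of values

# 'retrieved' (or the weights themselves) can then bias the decoder toward
# copying the relevant KB value when generating the next system turn.
print(retrieved.shape)
```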
Learning to Select External Knowledge with Multi-Scale Negative Sampling
This work explores several advanced techniques to enhance the utilization of external knowledge and boost the quality of response generation, including schema-guided knowledge decision, negatives-enhanced knowledge selection, and knowledge-grounded response generation.
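An illustrative sketch of multi-scale negative sampling for training a knowledge selector; the field names and the three sampling scales are assumptions for illustration, not the paper's exact recipe.

```python
# Negatives are drawn at several "scales" of difficulty: same entity,
# same domain, and other domains.

import random


def sample_negatives(gold, knowledge_base, n_per_scale=2):
    """Return negative snippets grouped by how close they are to the gold one."""
    same_entity = [k for k in knowledge_base
                   if k["entity"] == gold["entity"] and k["id"] != gold["id"]]
    same_domain = [k for k in knowledge_base
                   if k["domain"] == gold["domain"] and k["entity"] != gold["entity"]]
    cross_domain = [k for k in knowledge_base if k["domain"] != gold["domain"]]

    negatives = []
    for pool in (same_entity, same_domain, cross_domain):
        negatives += random.sample(pool, min(n_per_scale, len(pool)))
    return negatives


def build_training_pairs(dialogue, gold, knowledge_base):
    """Pair the dialogue context with the gold snippet (label 1) and with
    negatives of increasing difficulty (label 0) for a binary ranker."""
    pairs = [(dialogue, gold["text"], 1)]
    pairs += [(dialogue, neg["text"], 0) for neg in sample_negatives(gold, knowledge_base)]
    return pairs
```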
MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines
This work uses crowdsourced workers to re-annotate states and utterances based on the original utterances in the dataset, benchmarks a number of state-of-the-art dialogue state tracking models on the MultiWOZ 2.1 dataset, and reports joint state tracking performance on the corrected state annotations.
Language Models are Unsupervised Multitask Learners
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
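As a small illustration (not from the cited paper), a pretrained GPT-2 checkpoint can be prompted zero-shot through the Hugging Face Transformers library; the prompt and checkpoint choice are illustrative.

```python
# Zero-shot continuation with GPT-2: the model extends a dialogue prompt
# without any task-specific fine-tuning.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "User: Is the hotel pet friendly?\nSystem:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```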
Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning
This work introduces Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge encoded as software and system action templates, and considerably reduce the amount of training data required, while retaining the key benefit of inferring a latent representation of dialog state.
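A toy sketch of the HCN idea: a recurrent state is combined with a hand-written action mask before choosing among fixed action templates. The templates, mask rules, and dimensions are illustrative assumptions.

```python
# A recurrent dialogue state feeds a policy over action templates, and
# domain-specific "software" masks out actions that are currently invalid.

import torch
import torch.nn as nn

ACTION_TEMPLATES = [
    "api_call restaurant <cuisine> <area>",
    "Which area would you like?",
    "Which cuisine would you like?",
]

class TinyHCN(nn.Module):
    def __init__(self, feat_dim=32, hidden_dim=64, n_actions=len(ACTION_TEMPLATES)):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        self.policy = nn.Linear(hidden_dim, n_actions)

    def forward(self, features, hidden, action_mask):
        # features: utterance/entity features for this turn
        # action_mask: 1 = allowed, supplied by hand-written domain code
        hidden = self.rnn(features, hidden)
        logits = self.policy(hidden)
        logits = logits.masked_fill(action_mask == 0, float("-inf"))
        return logits, hidden

# Example step: the domain code forbids the API call (action 0) because a
# required slot is still missing.
model = TinyHCN()
features = torch.randn(1, 32)
hidden = torch.zeros(1, 64)
mask = torch.tensor([[0, 1, 1]])
logits, hidden = model(features, hidden, mask)
print(ACTION_TEMPLATES[logits.argmax(dim=-1).item()])
```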
Transformers: State-of-the-Art Natural Language Processing
Transformers is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API and a curated collection of pretrained models made by and available for the community.
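A brief example of the library's unified API; the checkpoint and task choice are illustrative.

```python
# The same pipeline abstraction covers many architectures and tasks.

from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Can you tell me whether the hotel allows pets?"))
# -> [{'label': 'POSITIVE' or 'NEGATIVE', 'score': ...}]
```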
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
BART is presented, a denoising autoencoder for pretraining sequence-to-sequence models, which matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks.
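A minimal example of loading a pretrained BART checkpoint through the Transformers library; the public summarization checkpoint stands in for the kind of seq2seq model one might fine-tune for knowledge-grounded response generation.

```python
# Seq2seq generation with a pretrained BART checkpoint.

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

text = ("The hotel allows dogs and cats up to 20 kg. "
        "A cleaning fee of 25 euros per stay applies.")
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```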
RoBERTa: A Robustly Optimized BERT Pretraining Approach
It is found that BERT was significantly undertrained and can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.
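A sketch of the kind of RoBERTa classifier one could fine-tune for the binary knowledge-seeking turn detection sub-task; the checkpoint, label mapping, and untrained classification head are illustrative assumptions.

```python
# RoBERTa with a sequence-classification head for binary turn detection.

import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

turn = "Is there a parking lot at the hotel?"
inputs = tokenizer(turn, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
is_knowledge_seeking = logits.argmax(dim=-1).item() == 1  # head is untrained here; illustrative only
print(is_knowledge_seeking)
```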
Learning Fine-Grained Image Similarity with Deep Ranking
Jiang Wang, Yang Song, et al. 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014.
A deep ranking model that employs deep learning techniques to learn a similarity metric directly from images has higher learning capability than models based on hand-crafted features and deep classification models.
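A small PyTorch sketch of the triplet ranking objective behind deep ranking models; the toy encoder and margin value are illustrative assumptions.

```python
# Triplet ranking: the anchor should be closer to a similar (positive) image
# than to a dissimilar (negative) one by at least a margin.

import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # toy image encoder
loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(8, 3, 32, 32)    # query images
positive = torch.randn(8, 3, 32, 32)  # images judged similar to the anchor
negative = torch.randn(8, 3, 32, 32)  # images judged dissimilar

loss = loss_fn(embed(anchor), embed(positive), embed(negative))
loss.backward()  # gradients flow into the embedding network
print(loss.item())
```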