Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations

@article{Coope2020SpanConveRTFS,
  title={Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations},
  author={Sam Coope and Tyler Farghly and Daniel Gerz and Ivan Vulic and Matthew Henderson},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.08866}
}
We introduce Span-ConveRT, a light-weight model for dialog slot-filling which frames the task as a turn-based span extraction task. This formulation allows for a simple integration of conversational knowledge coded in large pretrained conversational models such as ConveRT (Henderson et al., 2019). We show that leveraging such knowledge in Span-ConveRT is especially useful for few-shot learning scenarios: we report consistent gains over 1) a span extractor that trains representations from… 
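
To make the span-extraction framing concrete, the sketch below shows a generic pointer-style head over per-token encoder outputs. It is an illustrative assumption, not the exact Span-ConveRT architecture; the names (SpanExtractionHead, hidden_dim) and the random stand-in features are hypothetical.

# Illustrative sketch only, not the authors' exact model: a generic span-extraction
# head for slot filling. It assumes a frozen pretrained encoder (e.g. ConveRT) has
# already produced one vector per token of the user turn; the head then scores every
# token as a candidate span start and span end for a given slot.
import torch
import torch.nn as nn

class SpanExtractionHead(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.start_scorer = nn.Linear(hidden_dim, 1)  # score for "slot value starts here"
        self.end_scorer = nn.Linear(hidden_dim, 1)    # score for "slot value ends here"

    def forward(self, token_reprs: torch.Tensor):
        # token_reprs: (batch, seq_len, hidden_dim) from the pretrained encoder
        start_logits = self.start_scorer(token_reprs).squeeze(-1)  # (batch, seq_len)
        end_logits = self.end_scorer(token_reprs).squeeze(-1)      # (batch, seq_len)
        return start_logits, end_logits

# Toy usage with random tensors standing in for pretrained conversational features.
head = SpanExtractionHead(hidden_dim=512)
fake_turn = torch.randn(1, 12, 512)  # one user turn of 12 tokens
start_logits, end_logits = head(fake_turn)
print("predicted slot value span:",
      start_logits.argmax(-1).item(), "to", end_logits.argmax(-1).item())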

ConVEx: Data-Efficient and Few-Shot Slot Labeling

ConVEx’s reduced pretraining times and cost, along with its efficient fine-tuning and strong performance, promise wider portability and scalability for data-efficient sequence-labeling tasks in general.

SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding

Pre-training methods with contrastive learning objectives have shown remarkable success in dialog understanding tasks. However, current contrastive learning solely considers the self-augmented dialog

A Simple But Effective Approach to n-shot Task-Oriented Dialogue Augmentation

This work introduces a framework that creates synthetic task-oriented dialogues in a fully automatic manner, operating with inputs as small as a few dialogues, and concludes that this end-to-end dialogue augmentation framework can be a crucial tool for natural language understanding performance in emerging task-oriented dialogue domains.

Language Model is all You Need: Natural Language Understanding as Question Answering

This work maps Natural Language Understanding (NLU) problems to Question Answering (QA) problems and shows that in low-data regimes this approach offers significant improvements over other approaches to NLU.
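
As a rough illustration of this mapping (the paper's exact question templates are not reproduced here; the templates and helper below are hypothetical), each slot can be turned into a natural-language question whose answer is a span of the user utterance:

# Hedged illustration of the general idea: each slot becomes a question, and the
# slot value is the answer span an extractive QA model predicts in the utterance.
def slot_to_qa_example(utterance: str, slot: str, question_templates: dict) -> dict:
    """Recast one slot-filling decision as a reading-comprehension example."""
    return {
        "context": utterance,
        "question": question_templates[slot],
    }

# Hypothetical templates for a restaurant-booking domain.
templates = {
    "time": "What time does the user want to book?",
    "party_size": "How many people is the booking for?",
}

example = slot_to_qa_example("Can I get a table for four at 7pm?", "time", templates)
print(example)  # any extractive QA model can now predict the answer span "7pm"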

DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue

DialoGLUE (Dialogue Language Understanding Evaluation), a public benchmark consisting of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks, is introduced, designed to encourage dialogue research in representation-based transfer, domain adaptation, and sample-efficient task learning.

NLU++: A Multi-Label, Slot-Rich, Generalisable Dataset for Natural Language Understanding in Task-Oriented Dialogue

We present NLU++, a novel dataset for natural language understanding (NLU) in task-oriented dialogue (ToD) systems, with the aim to provide a much more challenging evaluation environment for

Improved and Efficient Conversational Slot Labeling through Question Answering

This work focuses on modeling and studying slot labeling (SL), a crucial component of NLU for dialog, through the QA optics, aiming to improve its performance and make it more resilient to working with limited task data.

Semantic-based Pre-training for Dialogue Understanding

A semantic-based pre-training framework is proposed that extends the standard pre-training framework with three tasks for learning 1) core semantic units, 2) semantic relations, and 3) the overall semantic representation according to AMR graphs.

UniDU: Towards A Unified Generative Dialogue Understanding Framework

This paper reformulates all DU tasks into a unified prompt-based generative model paradigm, named UniDU, and introduces a novel model-agnostic multi-task training strategy (MATS) to dynamically adapt the weights of diverse tasks for the best knowledge sharing during training.
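
The sketch below gives a rough sense of the prompt-based reformulation; the templates and function name are hypothetical and not taken from the UniDU paper.

# Every dialogue-understanding task is phrased as a text-to-text problem, so one
# generative model can serve intent detection, slot filling, state tracking, etc.
def build_prompt(task: str, dialogue: str) -> str:
    templates = {
        "intent": "Dialogue: {d} Question: what is the user's intent? Answer:",
        "slot": "Dialogue: {d} Question: which slot values are mentioned? Answer:",
    }
    return templates[task].format(d=dialogue)

print(build_prompt("intent", "I'd like to book a table for two tonight."))
# The generative model is then trained to emit the label as text (e.g. "book_restaurant").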

Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey

This survey is the most comprehensive and up-to-date one at present in the area of dialogue systems and dialogue-related tasks, extensively covering the popular frameworks, topics, and datasets.

References


Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset

This work introduces the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains, and presents a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots provided as input.
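
For intuition, a hypothetical schema snippet in the spirit of the schema-guided paradigm might look as follows; the field names and values are illustrative rather than copied from the SGD release.

# The set of intents and slots is not hard-coded in the model but supplied as input,
# so new services can be handled by conditioning on their natural-language descriptions.
restaurant_service = {
    "service_name": "Restaurants",
    "intents": [{"name": "ReserveRestaurant",
                 "description": "Reserve a table at a restaurant"}],
    "slots": [{"name": "time", "description": "Time of the reservation"},
              {"name": "party_size", "description": "Number of people"}],
}
# A schema-guided model reads these descriptions alongside the dialogue history
# when predicting the dialogue state for this service.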

DIET: Lightweight Language Understanding for Dialogue Systems

Large-scale pre-trained language models have shown impressive results on language understanding benchmarks like GLUE and SuperGLUE, improving considerably over other pre-training methods like

Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces

The machine learning architecture of the Snips Voice Platform is presented, a software solution to perform Spoken Language Understanding on microprocessors typical of IoT devices that is fast and accurate while enforcing privacy by design, as no personal user data is ever collected.

Efficient Intent Detection with Dual Sentence Encoders

The usefulness and wide applicability of the proposed intent detectors are demonstrated, showing that they outperform intent detectors based on fine-tuning the full BERT-Large model or using BERT as a fixed black-box encoder on three diverse intent detection data sets.

ConveRT: Efficient and Accurate Conversational Representations from Transformers

This work proposes ConveRT (Conversational Representations from Transformers), a pretraining framework for conversational tasks satisfying all the following requirements: it is effective, affordable, and quick to train, and it promises wider portability and scalability for Conversational AI applications.

Training Neural Response Selection for Task-Oriented Dialogue Systems

A novel method is proposed which pretrains the response selection model on large general-domain conversational corpora and then fine-tunes the pretrained model for the target dialogue domain, relying only on the small in-domain dataset to capture the nuances of the given dialogue domain.

Slot Tagging for Task Oriented Spoken Language Understanding in Human-to-Human Conversation Scenarios

This work extends the task oriented LU problem to human-to-human (H2H) conversations, focusing on the slot tagging task, and explores several variants of a bidirectional LSTM architecture that relies on different knowledge sources, such as Web data, search engine click logs, expert feedback from H2M models, as well as previous utterances in the conversation.

RoBERTa: A Robustly Optimized BERT Pretraining Approach

It is found that BERT was significantly undertrained and can match or exceed the performance of every model published after it, and that the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.

Pretraining Methods for Dialog Context Representation Learning

This paper examines various unsupervised pretraining objectives for learning dialog context representations. Two novel methods of pretraining dialog context encoders are proposed, and a total of four

Unsupervised Data Augmentation

UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods, which leads to substantial improvements on six language tasks and three vision tasks even when the labeled set is extremely small.