GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation

Derek Chen, Zhou Yu
Practical dialogue systems require robust methods of detecting out-of-scope (OOS) utterances to avoid conversational breakdowns and related failure modes. Directly training a model with labeled OOS examples yields reasonable performance, but obtaining such data is a resource-intensive process. To tackle this limited-data problem, previous methods focus on better modeling the distribution of in-scope (INS) examples. We introduce GOLD as an orthogonal technique that augments existing data to… 


Pseudo-OOD training for robust language models

A post hoc framework called POORE (POsthoc pseudo OOD REgularization) is proposed that generates pseudo-OOD samples using in-distribution (IND) data, leading to new state-of-the-art gains on the OOD prediction task at test time.

DG2: Data Augmentation Through Document Grounded Dialogue Generation

An automatic data augmentation technique grounded on documents, using a generative dialogue model consisting of a user bot and an agent bot that can synthesize diverse dialogues given an input document; the synthesized dialogues are then used to train a downstream model.

Metric Learning and Adaptive Boundary for Out-of-Domain Detection

This work designs an OOD detection algorithm that is independent of OOD data, based on a simple but efficient approach of combining metric learning with an adaptive decision boundary; it outperforms a wide range of current state-of-the-art algorithms on publicly available datasets.
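The adaptive-boundary idea above can be sketched in a few lines: assign an utterance embedding to its nearest class centroid, and reject it as out-of-domain when the distance exceeds that class's boundary radius. The centroids, radii, and Euclidean metric here are illustrative assumptions, not the paper's exact formulation (which learns the boundaries jointly with a metric-learning objective):

```python
import numpy as np

def predict_with_boundary(x, centroids, radii):
    """Adaptive-boundary OOD detection (sketch): assign x to the nearest
    class centroid; if the distance exceeds that class's radius, flag it
    as out-of-domain (returned as -1). Radii here are given, not learned."""
    dists = np.linalg.norm(centroids - x, axis=1)  # distance to each centroid
    k = int(dists.argmin())                        # nearest in-domain class
    return k if dists[k] <= radii[k] else -1       # -1 = out-of-domain
```

In the full method, each radius is tuned per class so that in-domain points fall inside the ball while everything else is rejected.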

Induce Spoken Dialog Intents via Deep Unsupervised Context Contrastive Clustering

This work first transforms pretrained LMs into conversational encoders with in-domain dialogs, then conducts context-aware contrastive learning to reveal latent intent semantics via the coherence from dialog contexts, and proposes a novel clustering method to iteratively refine the representation.

Knowledge-Grounded Conversational Data Augmentation with Generative Conversational Networks

The results show that for conversations without knowledge grounding, GCN can generalize from the seed data, producing novel conversations that are less relevant but more engaging; for knowledge-grounded conversations, it can produce more knowledge-focused, fluent, and engaging conversations.

Estimating Soft Labels for Out-of-Domain Intent Detection

This paper proposes an adaptive soft pseudo labeling (ASoul) method that can estimate soft labels for pseudo OOD samples when training OOD detectors and consistently improves the OOD detection performance and outperforms various competitive baselines.

Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble

A novel framework based on contrastive learning is proposed that encourages intermediate features to learn layer-specialized representations and assembles them implicitly into a single representation to absorb rich information in the pre-trained language model.

Delving into Out-of-Distribution Detection with Vision-Language Representations

This paper proposes Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts that achieves superior performance on a wide variety of real-world tasks.
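As a rough sketch of the zero-shot scoring idea, assuming image and concept embeddings have already been extracted (e.g. by a CLIP-style model), MCM-style scoring reduces to a temperature-scaled softmax over cosine similarities; the function name and temperature value are illustrative assumptions:

```python
import numpy as np

def mcm_score(image_feat, concept_feats, temperature=1.0):
    """Maximum Concept Matching (sketch): score an input by its best
    softmax-normalized cosine similarity to a set of textual concept
    embeddings. A higher score suggests the input is in-distribution."""
    # L2-normalize so dot products are cosine similarities
    v = image_feat / np.linalg.norm(image_feat)
    C = concept_feats / np.linalg.norm(concept_feats, axis=1, keepdims=True)
    sims = C @ v                        # cosine similarity to each concept
    probs = np.exp(sims / temperature)
    probs /= probs.sum()                # softmax over concepts
    return float(probs.max())           # MCM score
```

An OOD input matches no single concept strongly, so the softmax stays flat and the maximum stays low; thresholding this score gives the detector.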

Data Augmentation for Intent Classification

It is found that while certain methods dramatically improve qualitative and quantitative performance, other methods have minimal or even negative impact.

POEM: Out-of-Distribution Detection with Posterior Sampling

A novel posterior sampling-based outlier mining framework, POEM, is proposed, which facilitates efficient use of outlier data and promotes learning a compact decision boundary between ID and OOD data for improved detection.

Contextual Out-of-domain Utterance Handling with Counterfeit Data Augmentation

  • Sungjin Lee, Igor Shalyminov
  • Computer Science
    ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2019
This paper proposes a novel OOD detection method that does not require OOD data, utilizing counterfeit OOD turns in the context of a dialog; it outperforms state-of-the-art dialog models equipped with a conventional OOD detection mechanism by a large margin in the presence of OOD utterances.

Improving Dialogue Breakdown Detection with Semi-Supervised Learning

The use of semi-supervised learning methods to improve dialogue breakdown detection is investigated, including continued pre-training on the Reddit dataset and a manifold-based data augmentation method.

Out-of-Domain Detection for Natural Language Understanding in Dialog Systems

A novel model is proposed to generate high-quality pseudo OOD samples that are akin to in-domain (IND) input utterances, thereby improving OOD detection performance; the method is demonstrated to be effective in NLU.

Likelihood Ratios and Generative Classifiers for Unsupervised Out-of-Domain Detection In Task Oriented Dialog

This work is hitherto the first to investigate the use of a generative classifier and the computation of a marginal likelihood (ratio) for OOD detection at test time, finding that this approach outperforms both simple likelihood (ratio) based and other prior approaches.

Automatically Learning Data Augmentation Policies for Dialogue Tasks

This work adapts AutoAugment to automatically discover effective perturbation policies for natural language processing (NLP) tasks such as dialogue generation, achieving significant improvements over the previous state of the art, including models trained on manually designed policies.

Sequence-to-Sequence Data Augmentation for Dialogue Language Understanding

A sequence-to-sequence generation based data augmentation framework is proposed that leverages an utterance's semantic alternatives in the training data to produce diverse utterances that help to improve the language understanding module.

Revisiting Mahalanobis Distance for Transformer-Based Out-of-Domain Detection

The broader analysis shows that the reason for success lies in the fact that the fine-tuned Transformer is capable of constructing homogeneous representations of in-domain utterances, revealing a geometrical disparity to out-of-domain utterances that the Mahalanobis distance captures easily.
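The Mahalanobis-distance recipe referred to here is commonly implemented as per-class means with a shared covariance over encoder features; a minimal sketch, assuming features are already extracted from the fine-tuned Transformer (all function and variable names are illustrative):

```python
import numpy as np

def fit_mahalanobis(feats, labels):
    """Fit per-class means and a shared precision matrix over in-domain
    features (a common recipe for Transformer OOD detection)."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([feats[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(feats)  # shared (tied) covariance
    prec = np.linalg.pinv(cov)                # pseudo-inverse for stability
    return means, prec

def ood_score(x, means, prec):
    """Minimum squared Mahalanobis distance to any class mean.
    A larger distance suggests the utterance is out-of-domain."""
    return min(float((x - m) @ prec @ (x - m)) for m in means.values())
```

Thresholding this minimum distance then separates in-domain from out-of-domain utterances, exploiting exactly the geometrical disparity described above.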

Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog

This paper presents a new data set of 57k annotated utterances in English, Spanish, and Thai and uses this data set to evaluate three different cross-lingual transfer methods, finding that given several hundred training examples in the target language, the latter two methods outperform translating the training data.

KLOOS: KL Divergence-based Out-of-Scope Intent Detection in Human-to-Machine Conversations

An out-of-scope intent detection method, called KLOOS, is proposed, based on a novel feature extraction mechanism that incorporates the information accumulation of sequential word processing; it statistically significantly improves out-of-scope sensitivity in all cases.

Task-Oriented Dialogue as Dataflow Synthesis

An approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph, which enables the expression and manipulation of complex user intents, and explicit metacomputation makes these intents easier for learned models to predict.