In-Context Learning for Few-Shot Dialogue State Tracking

@article{Hu2022InContextLF,
  title={In-Context Learning for Few-Shot Dialogue State Tracking},
  author={Yushi Hu and Chia-Hsuan Lee and Tianbao Xie and Tao Yu and Noah A. Smith and Mari Ostendorf},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.08568}
}
Collecting and annotating task-oriented dialogues is time-consuming and costly. Thus, zero- and few-shot learning for dialogue tasks presents an exciting opportunity. In this work, we propose an in-context (IC) learning framework for zero-shot and few-shot dialogue state tracking (DST), where a large pretrained language model (LM) takes a test instance and a few exemplars as input and directly decodes the dialogue state without any parameter updates. This approach is more flexible and…
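
As a rough illustration of this setup, here is a minimal sketch in plain Python, not the authors' released code; the exemplar format, the lm_complete callable, and the state-string encoding are all illustrative assumptions.

from typing import Callable, List, Tuple

Exemplar = Tuple[str, str]  # (dialogue context, gold dialogue-state string)

def build_prompt(exemplars: List[Exemplar], test_context: str) -> str:
    """Concatenate a few labeled exemplars with the unlabeled test instance."""
    parts = [f"[dialogue] {ctx}\n[state] {state}" for ctx, state in exemplars]
    parts.append(f"[dialogue] {test_context}\n[state]")  # the LM completes the state
    return "\n\n".join(parts)

def ic_dst(lm_complete: Callable[[str], str],
           exemplars: List[Exemplar],
           test_context: str) -> str:
    """Few-shot DST with no parameter updates: a single LM decoding pass."""
    return lm_complete(build_prompt(exemplars, test_context)).strip()

# Usage with a stub in place of a real pretrained LM:
stub = lambda prompt: " hotel-area=centre, hotel-stars=4"
demo = [("User: I need a cheap hotel.", "hotel-pricerange=cheap")]
print(ic_dst(stub, demo, "User: A 4-star place in the centre, please."))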

Citations

DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning

DiSTRICT is proposed, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to tune the model without any hand-crafted templates, thereby providing an important advantage for real-world deployments that often have limited resource availability.
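
The retrieval step can be pictured as nearest-neighbour search over dialogue embeddings. A minimal sketch, assuming a toy character-trigram encoder as a stand-in for DiSTRICT's trained retriever:

import numpy as np

def encode(texts, dim=256):
    """Toy embedding via hashed character trigrams; a real system would use a
    trained sentence encoder instead."""
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for j in range(len(t) - 2):
            vecs[i, hash(t[j:j + 3]) % dim] += 1.0
    return vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-8)

def retrieve_exemplars(train_dialogues, test_dialogue, k=5):
    """Return the k training dialogues most similar to the test dialogue."""
    scores = encode(train_dialogues) @ encode([test_dialogue])[0]  # cosine on unit vectors
    return [train_dialogues[i] for i in np.argsort(-scores)[:k]]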

Dialogue State Tracking with Zero-Shot and Few-Shot Learning for Generalization: A Review

Recent studies on DST with zero-shot and few-shot learning are reviewed, the characteristics of each model are described, and the performance of the models evaluated under the same conditions is summarized.

Dialogic: Controllable Dialogue Simulation with In-Context Learning

Experimental results on the MultiWOZ dataset demonstrate that training a model on the simulated dialogues leads to even better performance than using the same amount of human-generated dialogues under challenging low-resource settings, with as few as 85 dialogues as a seed.

MetaASSIST: Robust Dialogue State Tracking with Meta Learning

A meta-learning-based framework, MetaASSIST, is proposed to adaptively learn the weighting parameter, achieving a state-of-the-art joint goal accuracy of 80.10% on MultiWOZ 2.4.

“Do you follow me?”: A Survey of Recent Approaches in Dialogue State Tracking

It is argued that some critical aspects of dialogue systems such as generalizability are still underexplored and to motivate future studies, several research avenues are proposed.

ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

A progressive zero-shot dataset generation framework, ProGen, is proposed, which leverages feedback from the task-specific model to guide the generation of new training data via in-context examples, achieving on-par or superior performance with only 1% of the synthetic dataset size compared to baseline methods without in-context feedback.
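
The progressive loop can be schematized as: synthesize data conditioned on the current in-context pool, then keep the examples the task-specific model rates highest as the next pool. A sketch under assumed function boundaries, not the paper's exact procedure:

from typing import Callable, List

def progen_round(generate: Callable[[List[str]], List[str]],
                 quality_score: Callable[[str], float],
                 in_context_pool: List[str],
                 keep: int = 8) -> List[str]:
    """One round of generation with in-context feedback: the highest-scoring
    synthetic examples become the in-context pool for the next round."""
    synthetic = generate(in_context_pool)
    return sorted(synthetic, key=quality_score, reverse=True)[:keep]

# Rounds are chained, e.g. pool = progen_round(gen, score, pool), repeated until
# the accumulated synthetic dataset is large enough to train the final task model.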

Improving In-Context Few-Shot Learning via Self-Supervised Training

This paper proposes using self-supervision in an intermediate training stage between pretraining and downstream few-shot usage, with the goal of teaching the model to perform in-context few-shot learning.

PromptCap: Prompt-Guided Task-Aware Image Captioning

Image captioning aims to describe an image with a natural language sentence, allowing powerful language models to understand images. The framework of combining image captioning with language models…

Mediators: Conversational Agents Explaining NLP Model Behavior

Desiderata for Mediators, text-based conversational agents capable of explaining the behavior of neural models interactively using natural language, are established from the perspective of natural language processing research.

Binding Language Models in Symbolic Languages

Binder is proposed, a training-free neural-symbolic framework that maps the task input to a program, binding a unified API of language model functionalities to a programming language (e.g., SQL, Python) to extend its grammar coverage and tackle more diverse questions.

References

Showing 1-10 of 65 references

Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking

It is hypothesized that dialogue summaries are essentially unstructured dialogue states; hence, reformulating dialogue state tracking as a dialogue summarization problem is proposed, and the resulting method, DS2, outperforms previous work on few-shot DST on MultiWOZ 2.0 and 2.1.

Zero-Shot Dialogue State Tracking via Cross-Task Transfer

This work proposes TransferQA, a transferable generative QA model that seamlessly combines extractive QA and multi-choice QA via a text-to-text transformer framework, and tracks both categorical slots and non-categorical slots in DST.

Few-Shot Bot: Prompt-Based Learning for Dialogue Systems

An end-to-end chatbot named the Few-Shot Bot is created, which automatically selects the most appropriate conversational skill, queries different knowledge bases or the internet, and uses the retrieved knowledge to generate a human-like response, all using only a few dialogue examples per skill.

MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems

This paper introduces Levenshtein belief spans (Lev), which allow efficient dialogue state tracking with a minimal generation length and greatly improve the inference efficiency of MinTL-based systems.
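
The core idea can be approximated as emitting only the edits to the previous dialogue state rather than regenerating the full state each turn. The dict-diff below illustrates that training target; MinTL's actual Lev span syntax differs.

def state_delta(prev_state: dict, new_state: dict) -> dict:
    """Slots to insert or update, plus deletions marked with None."""
    delta = {s: v for s, v in new_state.items() if prev_state.get(s) != v}
    delta.update({s: None for s in prev_state if s not in new_state})
    return delta

def apply_delta(prev_state: dict, delta: dict) -> dict:
    """Reconstruct the full state from the previous state and the short edit."""
    state = dict(prev_state)
    for slot, value in delta.items():
        if value is None:
            state.pop(slot, None)
        else:
            state[slot] = value
    return state

# state_delta({"hotel-area": "centre"}, {"hotel-area": "north", "hotel-stars": "4"})
# -> {"hotel-area": "north", "hotel-stars": "4"}  (much shorter than the full state)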

BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional Encoder Representations from Transformer

Empirical evaluation shows BERT-DST with cross-slot parameter sharing outperforms prior work on the benchmark scalable DST datasets Sim-M and Sim-R, and achieves competitive performance on the standard DSTC2 and WOZ 2.0 datasets.

Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems

A Transferable Dialogue State Generator (TRADE) is proposed that generates dialogue states from utterances using a copy mechanism, facilitating transfer when predicting (domain, slot, value) triplets not encountered during training.

Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System

This study presents PPTOD, a unified plug-and-play model for task-oriented dialogue, and introduces a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialog corpora.

Efficient Dialogue State Tracking by Selectively Overwriting Memory

The accuracy gap between using the model's own predicted previous state and using the ground-truth previous state is analyzed, and improving state operation prediction is suggested as a promising direction for boosting DST performance.

Dialogue State Tracking with a Language Model using Schema-Driven Prompting

A new variation of the language modeling approach is introduced that uses schema-driven prompting to provide task-aware history encoding for both categorical and non-categorical slots.
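
Schema-driven prompting can be sketched as pairing the dialogue history with a natural-language description of each slot and decoding one value per slot; the schema entries and template here are illustrative assumptions, not the paper's exact format.

SCHEMA = {
    "hotel-pricerange": "the price range of the hotel",
    "hotel-area": "the area of town the hotel is in",
}

def slot_prompts(history: str, schema: dict = SCHEMA):
    """Yield one task-aware prompt per slot, embedding the schema description."""
    for slot, description in schema.items():
        yield slot, f"{history}\n[slot] {slot}: {description}\n[value]"

for slot, prompt in slot_prompts("User: somewhere cheap in the north, please."):
    print(f"--- prompt for {slot} ---\n{prompt}\n")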

From Machine Reading Comprehension to Dialogue State Tracking: Bridging the Gap

This paper proposes using machine reading comprehension (RC) in state tracking from two perspectives, model architectures and datasets, and divides the slot types in the dialogue state into categorical and extractive to borrow the advantages of both multiple-choice and span-based reading comprehension models.
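
The categorical/extractive split can be illustrated with naive stand-ins for the two reading-comprehension models: multiple choice over a closed value set versus span extraction from the dialogue. The value lists and patterns below are invented for the demo.

import re

CATEGORICAL_VALUES = {"hotel-pricerange": ["cheap", "moderate", "expensive"]}
EXTRACTIVE_PATTERNS = {"train-departure": r"(?:from|leaving|departing)\s+(\w+)"}

def track_slot(dialogue: str, slot: str) -> str:
    if slot in CATEGORICAL_VALUES:  # categorical slot: multiple-choice RC
        for value in CATEGORICAL_VALUES[slot]:
            if value in dialogue.lower():
                return value
        return "none"
    match = re.search(EXTRACTIVE_PATTERNS[slot], dialogue, re.IGNORECASE)
    return match.group(1) if match else "none"  # extractive slot: span-based RC

print(track_slot("I want a cheap hotel", "hotel-pricerange"))    # -> cheap
print(track_slot("A train from Cambridge", "train-departure"))   # -> Cambridge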
...