Reducing Conversational Agents’ Overconfidence Through Linguistic Calibration

@article{Mielke2022ReducingCA,
  title={Reducing Conversational Agents' Overconfidence Through Linguistic Calibration},
  author={Sabrina J. Mielke and Arthur D. Szlam and Emily Dinan and Y-Lan Boureau},
  journal={Transactions of the Association for Computational Linguistics},
  year={2022},
  volume={10},
  pages={857--872}
}
Abstract

While improving neural dialogue agents’ factual accuracy is the object of much research, another important aspect of communication, less studied in the setting of neural dialogue, is transparency about ignorance. In this work, we analyze to what extent state-of-the-art chit-chat models are linguistically calibrated, in the sense that their verbalized expression of doubt (or confidence) matches the likelihood that the model’s responses are factually incorrect (or correct). We find that…
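
To make the notion of linguistic calibration concrete, here is a minimal sketch (not the authors' code): it buckets responses by a verbalized confidence level and compares each bucket's stated confidence with its empirical accuracy; the phrase-to-probability mapping is a hypothetical illustration.

# Minimal sketch (not the paper's implementation) of checking linguistic
# calibration: group responses by their verbalized confidence level and
# compare stated confidence with observed factual accuracy per group.
from collections import defaultdict

# Hypothetical mapping from hedging phrases to confidence levels (illustrative only).
VERBAL_CONFIDENCE = {
    "i'm not sure, but": 0.3,
    "i think": 0.6,
    "i'm certain that": 0.9,
}

def linguistic_calibration(responses):
    """responses: iterable of (hedge_phrase, is_factually_correct) pairs."""
    buckets = defaultdict(list)
    for phrase, correct in responses:
        buckets[VERBAL_CONFIDENCE.get(phrase.lower(), 0.5)].append(correct)
    # A model is linguistically calibrated when stated confidence tracks accuracy.
    return {conf: sum(flags) / len(flags) for conf, flags in sorted(buckets.items())}

data = [("I think", True), ("I think", False), ("I'm certain that", True)]
print(linguistic_calibration(data))  # {0.6: 0.5, 0.9: 1.0}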

FaithDial: A Faithful Benchmark for Information-Seeking Dialogue

This work creates FaithDial, a new benchmark for hallucination-free dialogue, by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark; it benchmarks a series of state-of-the-art models and proposes an auxiliary contrastive objective that achieves the highest level of faithfulness and abstractiveness.

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning

Analysis in three settings shows that explanations judged by humans to be good (logically consistent with the input and the prediction) are more likely to co-occur with accurate predictions; the work then trains calibrators on automatically extracted scores that assess the reliability of explanations to improve performance post-hoc.
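
As an illustration of the post-hoc calibration idea described above (a hypothetical sketch, not the paper's code), one can fit a simple classifier on automatically extracted explanation-reliability features and use its predicted probability as a recalibrated confidence; the features below are assumed for the example.

# Hypothetical sketch of a post-hoc calibrator in the spirit of the work above
# (not the paper's code): fit a simple classifier on automatically extracted
# explanation-reliability features to predict whether a prediction is correct,
# then read its probability off as a recalibrated confidence score.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed features per example: [explanation/input overlap, explanation length, model log-prob].
X_train = np.array([[0.8, 12, -0.3], [0.2, 5, -2.1], [0.9, 20, -0.1], [0.1, 3, -3.0]])
y_train = np.array([1, 0, 1, 0])  # 1 = the few-shot prediction was correct

calibrator = LogisticRegression().fit(X_train, y_train)
print(calibrator.predict_proba(np.array([[0.7, 15, -0.5]]))[:, 1])  # calibrated confidence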

Teaching Models to Express Their Uncertainty in Words

It is shown that a GPT-3 model can learn to express uncertainty about its own answers in natural language, without use of model logits, and that it is sensitive to uncertainty in its own answers rather than merely imitating human examples.

Calibrated Interpretation: Confidence Estimation in Semantic Parsing

This work examines the calibration characteristics of six models across three model families on two common English semantic parsing datasets, finding that many models are reasonably well-calibrated and that there is a trade-off between calibration and performance.
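
For context, calibration in this quantitative sense is commonly summarized with expected calibration error (ECE); below is a minimal sketch, assuming model confidences and correctness labels are available (not code from the paper above).

# Minimal sketch of expected calibration error (ECE), a standard way such
# calibration characteristics are quantified; not code from the paper above.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities in [0, 1]; correct: 0/1 outcomes."""
    confidences, correct = np.asarray(confidences, float), np.asarray(correct, float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the fraction of examples in the bin
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.3], [1, 1, 0, 0]))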

References

Showing 1-10 of 44 references

Wizard of Wikipedia: Knowledge-Powered Conversational Agents

The best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.

Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

This work shows how several known failure modes of generative dialogue models, such as repetition, over-copying from the context, and logical inconsistency, can be addressed by extending the recently introduced unlikelihood loss to these cases, and demonstrates the efficacy of this approach across several dialogue tasks.
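
For background, the unlikelihood objective referenced above penalizes probability mass assigned to undesirable "negative" tokens alongside the standard likelihood term; a generic PyTorch sketch (not the paper's implementation) follows.

# Background sketch of the unlikelihood objective (generic, not the paper's code):
# alongside the usual likelihood term, it penalizes probability mass placed on
# "negative" tokens, e.g. tokens that would repeat or contradict the context.
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, negative_mask, alpha=1.0):
    """logits: [T, V]; targets: [T] gold token ids; negative_mask: [T, V] marks penalized tokens."""
    log_probs = F.log_softmax(logits, dim=-1)
    mle = F.nll_loss(log_probs, targets)                      # standard maximum-likelihood term
    one_minus_p = torch.clamp(1.0 - log_probs.exp(), min=1e-8)
    ul = -(negative_mask * one_minus_p.log()).sum() / negative_mask.sum().clamp(min=1.0)
    return mle + alpha * ul

logits = torch.randn(5, 100)
targets = torch.randint(0, 100, (5,))
negatives = torch.zeros(5, 100)
negatives[0, 7] = 1.0  # hypothetical token to penalize at step 0
print(unlikelihood_loss(logits, targets, negatives))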

What makes a good conversation? How controllable attributes affect human judgments

This work examines two controllable neural text generation methods, conditional training and weighted decoding, for controlling four important attributes of chit-chat dialogue: repetition, specificity, response-relatedness, and question-asking. It shows that by controlling combinations of these variables, the models obtain clear improvements in human quality judgments.
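
As a rough illustration of weighted decoding (a generic sketch, not the authors' implementation), a weighted control feature is added to the decoder's token logits at each generation step.

# Generic weighted-decoding sketch (an illustration, not the authors' code):
# bias the decoder's token logits with a weighted control feature, e.g. a
# rarity score so that higher weights push generation toward specific words.
import numpy as np

def weighted_decode_step(logits, feature_scores, weight=2.0):
    """logits: [V] raw decoder scores; feature_scores: [V] per-token control feature."""
    adjusted = logits + weight * feature_scores
    probs = np.exp(adjusted - adjusted.max())
    probs /= probs.sum()
    return int(np.argmax(probs))  # greedy choice from the feature-biased distribution

vocab_logits = np.array([2.0, 1.5, 0.5])
specificity = np.array([0.0, 0.3, 0.9])  # hypothetical per-token rarity feature
print(weighted_decode_step(vocab_logits, specificity, weight=3.0))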

Retrieve and Refine: Improved Sequence Generation Models For Dialogue

This work develops a model that combines retrieval and generation to avoid the deficiencies of each: it first retrieves a response and then refines it, with the final sequence generator treating the retrieval as additional context.
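
Below is a minimal sketch of the retrieve-and-refine idea (an illustration under assumed stand-in components, not the authors' model).

# Minimal sketch of a retrieve-and-refine pipeline (an illustration of the idea,
# not the authors' model): retrieve a candidate reply, expose it to the generator
# as extra context, and let the generator refine it. `retriever` and `generator`
# are hypothetical stand-ins for trained components.
def retrieve_and_refine(context, retriever, generator, sep=" [RETRIEVED] "):
    candidate = retriever(context)          # best response from a fixed response set
    augmented = context + sep + candidate   # retrieval appended as additional context
    return generator(augmented)             # generator may copy, edit, or ignore it

retriever = lambda ctx: "I love hiking in the Alps."
generator = lambda ctx: ctx.split("[RETRIEVED]")[-1].strip() + " Have you been?"
print(retrieve_and_refine("Do you like the outdoors?", retriever, generator))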

Plug-and-Play Conversational Models

This paper proposes and evaluates plug-and-play methods for controllable response generation, and demonstrates a high degree of control over the generated conversational responses with regard to multiple desired attributes, while being fluent.

DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

It is shown that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems.

Controlling Style in Generated Dialogue

This work adapts three previously proposed controllable generation architectures to open-domain dialogue generation, controlling the style of the generation to match one among about 200 possible styles, and shows how they can be used to provide insights into existing conversational datasets and to generate a varied set of styled conversation replies.

Recipes for Building an Open-Domain Chatbot

Human evaluations show the best models outperform existing approaches in multi-turn dialogue on engagingness and humanness measurements, and the limitations of this work are discussed by analyzing failure cases of the models.

Recipes for Safety in Open-domain Chatbots

This work introduces a new human-and-model-in-the-loop framework for both training safer models and evaluating them, as well as a novel method to distill safety considerations into generative models without the use of an external classifier at deployment time.

Can You Put it All Together: Evaluating Conversational Agents’ Ability to Blend Skills

This work investigates several ways to combine models trained towards isolated capabilities, ranging from simple model aggregation schemes that require minimal additional training, to various forms of multi-task training that encompass several skills at all training stages.