The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems

@inproceedings{Ziems2022TheMI,
  title={The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems},
  author={Caleb Ziems and Jane A. Yu and Yi-Chia Wang and Alon Y. Halevy and Diyi Yang},
  booktitle={ACL},
  year={2022}
}
Conversational agents have come increasingly close to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user’s trust in the moral integrity of the system. Moral deviations are difficult to mitigate because moral judgments are not universal, and there may be multiple competing judgments that apply to a situation simultaneously. In this work, we introduce a new resource, not to authoritatively… 


ProsocialDialog: A Prosocial Backbone for Conversational Agents
This work introduces ProsocialDialog, the first large-scale multi-turn dialogue dataset for teaching conversational agents to respond to problematic content in line with social norms, along with Canary, a dialogue safety detection module capable of generating RoTs given conversational context, and Prost, a socially informed dialogue agent.
Does Moral Code Have a Moral Code? Probing Delphi's Moral Philosophy
In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference
Target-Guided Dialogue Response Generation Using Commonsense and Data Augmentation
A new technique for target-guided response generation is introduced, which first finds a bridging path of commonsense knowledge concepts between the source and the target, and then uses the identified bridging path to generate transition responses.

References

Showing 1-10 of 102 references
Delphi: Towards Machine Ethics and Norms
The first major attempt to computationally explore the vast space of moral implications in real-world settings is conducted with Delphi, a unified model of descriptive ethics empowered by diverse data of people’s moral judgments from the Commonsense Norm Bank.
Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes
This work introduces Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes, and presents a new method to estimate the best possible performance on such tasks with inherently diverse label distributions, and explores likelihood functions that separate intrinsic from model uncertainty.
Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences
Moral Stories, a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning, is introduced, along with decoding strategies that combine multiple expert models and significantly improve the quality of generated actions, consequences, and norms over strong baselines.
Aligning AI With Shared Human Values
With the ETHICS dataset, it is found that current language models have a promising but incomplete understanding of basic ethical knowledge; the dataset provides a stepping stone toward AI that is aligned with human values.
When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals May Not Recognize
Researchers in moral psychology and social justice have agreed that morality is about matters of harm, rights, and justice. On this definition of morality, conservative opposition to social justice
Social Bias Frames: Reasoning about Social and Power Implications of Language
It is found that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias, they are not effective at spelling out more detailed explanations in terms of Social Bias Frames.
A Word on Machine Ethics: A Response to Jiang et al. (2021)
This work focuses on a single case study, the recently proposed Delphi model, and offers a critique of the project’s proposed method of automating moral judgments, concluding with a discussion of how machine ethics could usefully proceed by focusing on current and near-future uses of technology, in a way that centers transparency and democratic values and allows for straightforward accountability.
Social Chemistry 101: Learning to Reason about Social and Moral Norms
A new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real-life situations described in natural language is introduced, and a model framework, Neural Norm Transformer, learns and generalizes Social-Chem-101 to successfully reason about previously unseen situations, generating relevant (and potentially novel) attribute-aware social rules-of-thumb.
Enhancing the Measurement of Social Effects by Capturing Morality
This work empirically evaluates the use of a morality lexicon that was expanded via a quality-controlled, human-in-the-loop process and finds that enhancing the original lexicon led to measurable improvements in prediction accuracy on the selected NLP tasks.
Language Models have a Moral Dimension
Able to rate the (non-)normativity of arbitrary phrases without the LM being explicitly trained for this task, the moral direction's capability to guide LMs toward producing normative text is demonstrated on the RealToxicityPrompts testbed, preventing neural toxic degeneration in GPT-2.