Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes

Nicholas Lourie, Ronan Le Bras, Yejin Choi
As AI systems become an increasing part of people's everyday lives, it becomes ever more important that they understand people's ethical norms. Motivated by descriptive ethics, a field of study that focuses on people's descriptive judgments rather than theoretical prescriptions on morality, we investigate a novel, data-driven approach to machine ethics. We introduce SCRUPLES, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes. Each anecdote recounts a… 

Social Chemistry 101: Learning to Reason about Social and Moral Norms

A new conceptual formalism for studying people's everyday social norms and moral judgments over a rich spectrum of real-life situations described in natural language is presented, along with a model framework, the Neural Norm Transformer, which learns and generalizes Social-Chem-101 to successfully reason about previously unseen situations, generating relevant (and potentially novel) attribute-aware social rules of thumb.

Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences

Moral Stories, a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning, is introduced, and decoding strategies that combine multiple expert models are proposed, significantly improving the quality of generated actions, consequences, and norms compared to strong baselines.

Training Value-Aligned Reinforcement Learning Agents Using a Normative Prior

An approach to value-aligned reinforcement learning is introduced, in which an agent is trained with two reward signals: a standard task-performance reward, plus a normative-behavior reward derived from a value-aligned prior model previously shown to classify text as normative or non-normative.

On the Machine Learning of Ethical Judgments from Natural Language

Through an audit of recent work on computational approaches for predicting morality, this work examines the broader issues that arise from such efforts and offers a critique of such NLP methods for automating ethical decision-making.

A Word on Machine Ethics: A Response to Jiang et al. (2021)

This work focuses on a single case study of the recently proposed Delphi model and offers a critique of the project’s proposed method of automating morality judgments, and concludes with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology, in a way that centers around transparency, democratic values, and allows for straightforward accountability.

Assessing Cognitive Linguistic Influences in the Assignment of Blame

There are statistically significant differences in uses of first-person passive voice, as well as first-person agents and patients, between descriptions of situations that receive different blame judgments, and these features also aid performance in the task of predicting the eventual collective verdicts.

Delphi: Towards Machine Ethics and Norms

The first major attempt to computationally explore the vast space of moral implications in real-world settings is conducted, with Delphi, a unified model of descriptive ethics empowered by diverse data of people’s moral judgment from COMMONSENSE NORM BANK.

Does Moral Code have a Moral Code? Probing Delphi’s Moral Philosophy

In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference…

The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems

The Moral Integrity Corpus, MIC, is a resource that captures the moral assumptions of 38k prompt-reply pairs using 99k distinct Rules of Thumb (RoTs); it is suggested that MIC will be a useful resource for understanding language models' implicit moral assumptions and for flexibly benchmarking the integrity of conversational agents.

ValueNet: A New Dataset for Human Value Driven Dialogue System

This work presents a new large-scale human value dataset called ValueNet, which contains human attitudes on 21,374 text scenarios and is the first one trying to incorporate a value model into emotionally intelligent dialogue systems.

BERT has a Moral Compass: Improvements of ethical and moral values of machines

It is argued that, through an advanced semantic representation of text, BERT allows one to gain better insights into the moral and ethical values implicitly represented in text, which enables the Moral Choice Machine (MCM) to extract more accurate imprints of moral choices and ethical values.

Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices

It is shown that applying machine learning to human texts can extract deontological ethical reasoning about "right" and "wrong" conduct, and indicates that text corpora contain recoverable and accurate imprints of the authors' social, ethical and even moral choices.

Intuitive ethics: how innately prepared intuitions generate culturally variable virtues

maps embellished with fantastical beasts, sixteenth-century wonder chambers filled with natural and technological marvels, even late-twentieth-century supermarket tabloids – all attest to the human…

Building Ethics into Artificial Intelligence

This paper complements existing surveys on the psychological, social, and legal discussions of the topic with an analysis of recent advances in technical solutions for AI governance, and proposes a taxonomy that divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; 4) ethics in human-AI interactions.

Machine Ethics: Creating an Ethical Intelligent Agent

The importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics are discussed.

Reinforcement Learning as a Framework for Ethical Decision Making

This work argues that the reinforcement-learning framework achieves the appropriate generality required to theorize about an idealized ethical artificial agent, and offers the proper foundations for grounding specific questions about ethical learning and decision making that can promote further scientific investigation.

Moral preferences

This work discusses how to exploit and adapt current preference formalisms in order to model morality and ethics theories, as well as the dynamic integration of moral code into personal preferences.

Moral Foundations Twitter Corpus: A Collection of 35k Tweets Annotated for Moral Sentiment

The Moral Foundations Twitter Corpus is introduced, a collection of 35,108 tweets that have been curated from seven distinct domains of discourse and hand annotated by at least three trained annotators for 10 categories of moral sentiment.

Building Ethically Bounded AI

The notion of ethically-bounded AI is defined and motivated, two concrete examples are described, and some outstanding challenges are outlined.