Corpus ID: 238857096

Delphi: Towards Machine Ethics and Norms

@article{Jiang2021DelphiTM,
  title={Delphi: Towards Machine Ethics and Norms},
  author={Liwei Jiang and Jena D. Hwang and Chandrasekhar Bhagavatula and Ronan Le Bras and Maxwell Forbes and Jon Borchardt and Jenny Liang and Oren Etzioni and Maarten Sap and Yejin Choi},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.07574}
}
Failing to account for moral norms could notably hinder AI systems’ ability to interact with people. Making moral judgments requires AI systems to be grounded in social, cultural, and ethical norms. However, open-world situations with different groundings may shift moral implications significantly. For example, while “driving my friend to the airport” is “good”, “driving my friend to the airport with a car I stole” is “not okay.” In natural language processing, machine moral reasoning is still in a…

CAN MACHINES LEARN MORALITY? THE DELPHI EXPERIMENT

This work conducts the first major attempt to computationally explore the vast space of moral implications in real-world settings, with Delphi, a unified model of descriptive ethics trained on diverse data of people’s moral judgments from the COMMONSENSE NORM BANK.

The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems

The Moral Integrity Corpus (MIC) is a resource that captures the moral assumptions of 38k prompt-reply pairs using 99k distinct Rules of Thumb (RoTs); the authors suggest that MIC will be a useful resource for understanding language models’ implicit moral assumptions and for flexibly benchmarking the integrity of conversational agents.

On the Machine Learning of Ethical Judgments from Natural Language

Through an audit of recent work on computational approaches for predicting morality, this work examines the broader issues that arise from such efforts and offers a critique of such NLP methods for automating ethical decision-making.

Does Moral Code have a Moral Code? Probing Delphi’s Moral Philosophy

In an effort to guarantee that machine learning model outputs conform with human moral values, recent work has begun exploring the possibility of explicitly training models to learn the difference…

A Word on Machine Ethics: A Response to Jiang et al. (2021)

This work focuses on a single case study of the recently proposed Delphi model, offers a critique of the project’s proposed method of automating morality judgments, and concludes with a discussion of how machine ethics could usefully proceed: by focusing on current and near-future uses of technology in a way that centers transparency and democratic values and allows for straightforward accountability.

Automated Kantian Ethics: A Faithful Implementation

As we grant artificial intelligence increasing power and independence in contexts like healthcare, policing, and driving, AI faces moral dilemmas but lacks the tools to solve them. Warnings from…

Rise of the Bioethics AI: Curse or Blessing?

In October 2021, the Allen Institute for Artificial Intelligence publicly released Delphi, an artificial intelligence system (AI) trained to make general moral decisions (Allen Institute for…

When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment

This paper presents a novel challenge set consisting of rule-breaking question answering (RBQA) cases that involve potentially permissible rule-breaking, inspired by recent moral psychology studies, and proposes a novel moral chain-of-thought prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.

Mapping Topics in 100,000 Real-life Moral Dilemmas

Moral dilemmas play an important role in theorizing both about ethical norms and moral psychology. Yet thought experiments borrowed from the philosophical literature often lack the nuances and…

Automated Kantian Ethics

As we grant artificial intelligence increasing power and independence in contexts like healthcare, policing, and driving, AI faces moral dilemmas but lacks the tools to solve them. The dangers of…