• Corpus ID: 227247585

"A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity?

Authors: Allison Woodruff, Yasmin Asare Anderson, Katherine Jameson Armstrong, Marina Gkiza, Jay Jennings, Christopher Moessner, Fernanda B. Viégas, Martin Wattenberg, Lynette Webb, Fabian Wrede, Patrick Gage Kelley
Algorithmic systems are increasingly deployed to make decisions in many areas of people's lives. The shift from human to algorithmic decision-making has been accompanied by concern about potentially opaque decisions that are not aligned with social values, as well as proposed remedies such as explainability. We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants in Finland, Germany, the United Kingdom, and… 


What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in U.S. Social Policy

There is growing concern about governments’ use of algorithms to make high-stakes decisions. While an early wave of research focused on algorithms that predict risk to allocate punishment and

”Because AI is 100% right and safe”: User Attitudes and Sources of AI Authority in India

Perceptions of AI systems in India are investigated, drawing on 32 interviews and 459 survey respondents, to identify a case of AI authority: AI holds legitimized power to influence human actions without requiring adequate evidence about the capabilities of the system.

XAI for learning: Narrowing down the digital divide between “new” and “old” experts

A strategy is outlined for how XAI interface design could be tailored to have long-lasting educational value, along with an intermittent explainability approach that could help balance seamless and cognitively engaging explanations.

Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR

It is suggested that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. Such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
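The core idea can be sketched concretely. The snippet below is an illustrative toy, not from the cited paper: a hypothetical scoring model `decide` and a brute-force search for the smallest change (measured in search steps) that flips a denial into an approval, without ever inspecting the model's internals.

```python
# Illustrative sketch (hypothetical model and numbers): find the closest
# modified input that obtains a desirable outcome from a black-box decision.
from itertools import product

def decide(applicant):
    # Toy stand-in for a black-box model: approve if a weighted score >= 40.
    return applicant["income"] * 0.5 + applicant["savings"] * 0.3 >= 40

def counterfactual(applicant, steps, max_delta=10):
    """Return the approved candidate closest to `applicant` (L1 step distance)."""
    best, best_dist = None, float("inf")
    deltas = range(0, max_delta + 1)
    for d_income, d_savings in product(deltas, repeat=2):
        candidate = dict(applicant,
                         income=applicant["income"] + d_income * steps["income"],
                         savings=applicant["savings"] + d_savings * steps["savings"])
        dist = d_income + d_savings
        if decide(candidate) and dist < best_dist:
            best, best_dist = candidate, dist
    return best

rejected = {"income": 50, "savings": 20}  # score = 31, so denied
cf = counterfactual(rejected, steps={"income": 5, "savings": 5})
# With these toy numbers the search returns {"income": 65, "savings": 25}:
# the explanation is a statement about the world, not about the model.
```

The counterfactual doubles as actionable advice ("raise income by 15 and savings by 5"), which is exactly the appeal of this style of explanation for affected individuals.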

The Principles and Limits of Algorithm-in-the-Loop Decision Making

The results highlight the urgent need to expand analyses of algorithmic decision-making aids beyond evaluating the models themselves, to investigating the full sociotechnical contexts in which people and algorithms interact.

Explanations as Mechanisms for Supporting Algorithmic Transparency

An online experiment examined how different ways of explaining Facebook's News Feed algorithm affect participants' beliefs and judgments about it; all explanations made participants more aware of how the system works, and helped them determine whether the system is biased and whether they can control what they see.

Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability

  • M. Kaminski · SSRN Electronic Journal, 2019
Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature

Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation

These problems show that the GDPR lacks precise language as well as explicit, well-defined rights and safeguards against automated decision-making, and therefore risks being toothless.

Model-Agnostic Counterfactual Explanations for Consequential Decisions

This work builds on standard theory and tools from formal verification and proposes a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae.
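As a rough illustration of the shape of that approach (a real implementation encodes the predictive model and the distance function as logic formulae for an SMT solver; the toy boolean model and Hamming distance below are assumptions), one can pose a sequence of feasibility queries with a growing distance bound and stop at the first bound that admits a witness:

```python
# Hypothetical sketch of the "sequence of satisfiability problems" idea:
# for increasing distance bounds, ask whether any input within the bound
# receives the desired outcome. Here brute-force enumeration over boolean
# feature vectors stands in for the solver calls.
from itertools import product

def model(x):
    # Toy black-box over 4 boolean features (hypothetical).
    return x[0] and (x[1] or x[3])

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

def nearest_counterfactual(x, desired=True, n=4):
    for bound in range(1, n + 1):            # the sequence of feasibility queries
        witnesses = [c for c in product([False, True], repeat=n)
                     if hamming(x, c) <= bound and model(c) == desired]
        if witnesses:                        # first satisfiable bound = minimal distance
            return min(witnesses, key=lambda c: hamming(x, c))
    return None

x = (False, False, True, True)     # model(x) is False
cf = nearest_counterfactual(x)     # flips x[0]: (True, False, True, True)
```

Because the queries only ask "does a satisfying input exist within this bound?", the first satisfiable bound is by construction the distance of an optimal (closest) counterfactual.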

Algorithmic Authority: the Ethics, Politics, and Economics of Algorithms that Interpret, Decide, and Manage

This panel will bring together researchers of quantified self, healthcare, digital labor, social media, and the sharing economy to deepen the emerging discourses on the ethics, politics, and economics of algorithmic authority in multiple domains.

The Intuitive Appeal of Explainable Machines

It is shown that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

"Scary Robots": Examining Public Responses to AI

Overall, results showed that the most common visions of AI's impact elicit significant anxiety; negotiating the deployment of AI will require contending with these anxieties.

Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions

It is argued that approaches focused on contestability better promote professionals’ continued, active engagement with algorithmic systems than current frameworks.