Human Comprehension of Fairness in Machine Learning

Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, and Michael Carl Tschantz. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
Bias in machine learning has manifested as injustice in several areas, with notable examples including gender bias in job-related ads [4], racial bias in evaluating names on resumes [3], and racial bias in predicting criminal recidivism [1]. In response, research into algorithmic fairness has grown in both importance and volume over the past few years. Different metrics and approaches to algorithmic fairness have been proposed, many of which are based on prior legal and philosophical concepts [2]…


A Human-in-the-loop Framework to Construct Context-aware Mathematical Notions of Outcome Fairness

This work presents a framework for learning context-aware mathematical formulations of fairness. It elicits people's situated fairness assessments through human responses to pair-wise questions about decision subjects' circumstances and deservingness, and the harm or benefit imposed on them.

Where Is the Normative Proof? Assumptions and Contradictions in ML Fairness Research

It is shown that, in existing papers published in top venues, the results often become muddled once normative assumptions are clarified, and that the implicit normative assumptions and accompanying normative results contraindicate using these methods in practical fairness applications.

From Reality to World. A Critical Perspective on AI Fairness

This paper provides a new perspective on the debate on AI fairness and shows that criticism of ML unfairness is “realist”, in other words, grounded in an already instituted reality based on demographic categories produced by institutions.

Software Fairness: An Analysis and Survey

This survey provides a clear view of the state of the art in software fairness analysis, including the need to study intersectional/sequential bias, policy-based bias handling, and human-in-the-loop, socio-technical bias mitigation.

How good is good enough? Quantifying the impact of benefits, accuracy, and privacy on willingness to adopt COVID-19 decision aids

This work empirically models how accuracy and privacy influence intent to adopt algorithmic systems, developing the first statistical models of how the amount of benefit and degree of privacy risk in a data-driven decision aid may influence willingness to adopt.

Public Opinion Toward Artificial Intelligence

This chapter in the Oxford Handbook of AI Governance synthesizes and discusses research on public opinion toward artificial intelligence (AI), focusing on understanding citizens' and consumers' attitudes toward AI.

Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence

This work gives first answers on the relative importance of ethical principles for a specific use case: the use of artificial intelligence in tax fraud detection.

Counterfactual Fairness

This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.

Fairness in Machine Learning: Lessons from Political Philosophy

This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.

Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

This work descriptively surveys users on how they perceive and reason about fairness in algorithmic decision making, and proposes a framework for understanding why people perceive certain features as fair or unfair to use in algorithms.

A Qualitative Exploration of Perceptions of Algorithmic Fairness

While the concept of algorithmic fairness was largely unfamiliar to participants, learning about algorithmic (un)fairness elicited negative feelings that connect to current national discussions about racial injustice and economic inequality.

Fairness through awareness

This work presents a framework for fair classification comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.

Certifying and Removing Disparate Impact

This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
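The disparate-impact measure this line of work builds on can be sketched numerically. The function and toy data below are illustrative only (not the paper's implementation): the ratio of positive-outcome rates between the protected and unprotected groups, which the "four-fifths rule" flags when it falls below 0.8.

```python
def disparate_impact_ratio(outcomes, protected):
    """Ratio of positive-outcome rates: protected group over unprotected group.

    outcomes:  list of 0/1 decisions (1 = favorable outcome)
    protected: parallel list of 0/1 group flags (1 = protected group)
    """
    pos_prot = sum(o for o, p in zip(outcomes, protected) if p == 1)
    n_prot = sum(protected)
    pos_unprot = sum(o for o, p in zip(outcomes, protected) if p == 0)
    n_unprot = len(protected) - n_prot
    return (pos_prot / n_prot) / (pos_unprot / n_unprot)

# Hypothetical data: 2 of 5 protected applicants favored vs. 4 of 5 unprotected.
outcomes = [1, 1, 0, 0, 0, 1, 1, 1, 1, 0]
protected = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(disparate_impact_ratio(outcomes, protected))  # 0.5, below the 0.8 threshold
```

A ratio of 0.5 here would flag potential disparate impact under the four-fifths rule; the paper's contribution is connecting this legal notion to predictability of the protected class.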

On Fairness and Calibration

It is shown that calibration is compatible only with a single relaxed error constraint, and that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions for an existing classifier.

'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

This study suggests that there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

It is demonstrated that common fairness criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and that disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
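The incompatibility can be checked with the identity relating false-positive rate, prevalence, positive predictive value, and false-negative rate: FPR = p/(1-p) × (1-PPV)/PPV × (1-FNR). The sketch below uses hypothetical numbers to show that if two groups share the same PPV and FNR but differ in prevalence p, their false-positive rates are forced apart, so error rate balance fails.

```python
def implied_fpr(prevalence, ppv, fnr):
    """False-positive rate implied by prevalence, PPV, and FNR."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Two groups with equal PPV (0.6) and FNR (0.2) but different base rates.
fpr_a = implied_fpr(prevalence=0.3, ppv=0.6, fnr=0.2)
fpr_b = implied_fpr(prevalence=0.5, ppv=0.6, fnr=0.2)
print(fpr_a, fpr_b)  # unequal: predictive parity forces error rate imbalance
```

Any instrument satisfying predictive parity across groups with different recidivism prevalence therefore cannot also equalize false-positive rates, which is the arithmetic behind the paper's impossibility result.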

Big Data's Disparate Impact

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.