Corpus ID: 47021438


Authors: Jon M. Kleinberg, Jens Ludwig, Sendhil Mullainathan, Ashesh Rambachan
The growing use of algorithms in social and economic life has raised a concern: that they may inadvertently discriminate against certain groups. For example, one recent study found that natural language processing algorithms can embody basic gender biases, such as associating the word nurse more closely with the word she than with the word he (Caliskan, Bryson, and Narayanan 2017). Because the data used to train these algorithms are themselves tinged with stereotypes and past discrimination, it… 

Privacy evaluation of fairness-enhancing pre-processing techniques

This work analyzes recent advances in fairness-enhancing pre-processing techniques, evaluating how they control the fairness-utility trade-off and how well the transformed datasets can be used in downstream tasks. It finds that even though these techniques offer practical guarantees on specific fairness metrics, basic machine learning classifiers can often still predict the sensitive attribute from the transformed data, effectively enabling discrimination.

Artificial fairness? Trust in algorithmic police decision-making

Objectives: Test whether (1) people view a policing decision made by an algorithm as more or less trustworthy than when an officer makes the same decision; (2) people who are presented with a specific…

Iterated Algorithmic Bias in the Interactive Machine Learning Process of Information Filtering

The study finds that all three iterated bias modes affect the models learned by ML algorithms, and that iterated filter bias, which is prominent in personalized user interfaces, can limit humans' ability to discover relevant data.

‘Channel shift’: Technologically mediated policing and procedural justice

In recent years, police forces in the United Kingdom have introduced various technologies that alter the methods by which they interact with the public. In a parallel development, many forces have…

Accountability in AI: From Principles to Industry-specific Accreditation

It is argued that the present ecosystem is unbalanced, with a need for improved transparency via AI explainability, adequate documentation, and process formalisation to support internal audit, leading eventually to external accreditation.

Algorithmic Decision Making and the Cost of Fairness

This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The framework also applies to human decision makers carrying out structured decision rules.
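
As a toy illustration of that constrained-optimization framing (made-up data and thresholds, not the paper's code), the sketch below picks per-group score thresholds to maximize the number of true positives detained, subject to a detention-rate parity constraint and a capacity limit:

```python
from itertools import product

# Made-up (risk score, actually reoffends) pairs for two groups.
data = {
    "A": [(0.9, 1), (0.8, 1), (0.6, 0), (0.4, 1), (0.2, 0)],
    "B": [(0.7, 1), (0.5, 0), (0.5, 1), (0.3, 0), (0.1, 0)],
}
thresholds = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]

def detained(group, t):
    """Labels of everyone in `group` whose score clears threshold t."""
    return [y for s, y in data[group] if s >= t]

best = None
for ta, tb in product(thresholds, repeat=2):
    da, db = detained("A", ta), detained("B", tb)
    # Fairness constraint: detention rates within 0.2 of each other.
    if abs(len(da) / 5 - len(db) / 5) > 0.2:
        continue
    # Capacity constraint: detain at most half of the 10 people.
    if len(da) + len(db) > 5:
        continue
    utility = sum(da) + sum(db)  # true positives detained
    if best is None or utility > best[0]:
        best = (utility, ta, tb)

print(best)  # → (4, 0.8, 0.4)
```

The brute-force search stands in for the linear programs used in practice; the point is only that the objective (detained true positives) and the fairness condition enter as separate pieces of one optimization.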

Decoupled classifiers for fair and efficient machine learning

This work provides a simple and efficient decoupling technique that can be added on top of any black-box machine learning algorithm to learn different classifiers for different groups, and shows that the method can accommodate a range of fairness criteria.
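
A minimal sketch of the decoupling idea (a hypothetical stand-in learner and made-up data, not the paper's method): train a separate copy of the black-box learner per group and route each example to its group's model.

```python
def fit_threshold(examples):
    """Stand-in black-box learner: pick the score threshold that
    maximizes training accuracy on (score, label) pairs."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted({s for s, _ in examples}):
        acc = sum((s >= t) == bool(y) for s, y in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def fit_decoupled(examples_by_group, learner=fit_threshold):
    # Decoupling: one independently trained model per group.
    return {g: learner(ex) for g, ex in examples_by_group.items()}

models = fit_decoupled({
    "A": [(0.9, 1), (0.7, 1), (0.4, 0), (0.2, 0)],
    "B": [(0.6, 1), (0.3, 1), (0.2, 0), (0.1, 0)],
})
print(models)  # → {'A': 0.7, 'B': 0.3}
```

Because the learner is treated as a black box, the same wrapper works for any model family; group B ends up with a lower threshold than group A because its scores are shifted down.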

Inherent Trade-Offs in the Fair Determination of Risk Scores

The paper identifies ways in which key notions of fairness are incompatible with each other, and thereby provides a framework for thinking about the trade-offs between them.

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

It is demonstrated that the fairness criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and shown how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
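
The incompatibility can be seen numerically (made-up numbers below) via the standard identity relating false-positive rate to prevalence p, positive predictive value (PPV), and false-negative rate (FNR): FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). If two groups share the same PPV and FNR but differ in prevalence, their false-positive rates cannot be equal.

```python
def implied_fpr(p, ppv, fnr):
    """False-positive rate forced by prevalence p, PPV, and FNR."""
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.7, 0.2                   # equal predictive value and miss rate
fpr_a = implied_fpr(0.5, ppv, fnr)    # group with 50% prevalence
fpr_b = implied_fpr(0.3, ppv, fnr)    # group with 30% prevalence
print(round(fpr_a, 3), round(fpr_b, 3))  # → 0.343 0.147
```

Holding PPV and FNR fixed, the group with lower prevalence is forced to a lower false-positive rate, which is exactly the disparate-impact mechanism the paper describes.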

Prediction Policy Problems.

This work argues that an important class of policy problems requires not causal inference but predictive inference, and that new developments in the field of "machine learning" are particularly useful for addressing these prediction problems.
