Corpus ID: 3315224

Fairness in Machine Learning: Lessons from Political Philosophy

  • R. Binns
  • Published 8 December 2017
  • Philosophy
  • Decision-Making in Computational Design & Technology eJournal
What does it mean for a machine learning model to be 'fair', in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different…

Measuring justice in machine learning

This paper draws on examples from fair machine learning to suggest that the answer to this question is no: the capability theorists' arguments against Rawls's theory carry over into machine learning systems.

On Consequentialism and Fairness

This paper provides a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism, which brings to the fore some of the tradeoffs involved.

Measuring Justice in Machine Learning

How can we build more just machine learning systems? To answer this question, we need to know both what justice is and how to tell whether one system is more or less just than another. That is, we…

What Is Fairness? Implications For FairML

It is derived that fairness problems can already arise without the presence of protected attributes, and it is shown that fairness and predictive performance are not irreconcilable counterparts, but rather that the latter is necessary to achieve the former.

Human Comprehension of Fairness in Machine Learning

An online survey is developed to address non-expert comprehension and perceptions of one popular definition of ML fairness, demographic parity, and to investigate public perception of bias and (un)fairness in algorithmic decision-making.
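Demographic parity, the definition surveyed above, requires that positive decisions be issued at equal rates across groups. A minimal sketch of how the gap might be measured, with invented data and a hypothetical function name:

```python
def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between groups.

    `decisions` is a list of 0/1 outcomes; `groups` is a parallel list of
    group labels. Returns the largest pairwise gap in positive rates.
    """
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in members) / len(members)
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A's positive rate is 3/4, group B's is 1/4, so the gap is 0.5.
gap = demographic_parity_difference(decisions, groups)
```

A gap of zero would satisfy demographic parity exactly; in practice a small tolerance is usually allowed.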

The invisible power of fairness. How machine learning shapes democracy

An overview of the most widespread definitions of fairness in the field of machine learning is provided, arguing that the ideas highlighting each formalization are closely related to different ideas of justice and to different interpretations of democracy embedded in the authors' culture.

Algorithmic Fairness in Applied Machine Learning Contexts

Machine learning systems that shape, for example, consumer loan approvals or rates, job recommendations, text translations, credit decisions, and justice decisions may each impel different conceptions of machine learning fairness, so practitioners in each of these areas might think about fairness differently.

Trimming the Thorns of AI Fairness Research

Impediments to applying fairness and ethical concerns in real applications, whether abstruse philosophical debates or technical overhead such as the introduction of ever more hyperparameters, should be avoided.

On the apparent conflict between individual and group fairness

This paper draws on discussions from within the fair machine learning research and from political and legal philosophy to argue that individual and group fairness are not fundamentally in conflict, and outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness.

Algorithmic Fairness from a Non-ideal Perspective

A connection is demonstrated between the recent literature on fair machine learning and the ideal approach in political philosophy, and it is shown that some recently uncovered shortcomings in proposed algorithms reflect broader troubles faced by the ideal approach.



Rawlsian Fairness for Machine Learning

This work studies a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity", and gives an algorithm that satisfies this fairness constraint, while still being able to learn at a rate comparable to (but necessarily worse than) that of the best algorithms absent a fairness constraint.

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

A new notion of unfairness, disparate mistreatment, is introduced and defined in terms of misclassification rates; measures for avoiding it are proposed for decision-boundary-based classifiers and can be easily incorporated into their formulation as convex-concave constraints.
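Disparate mistreatment concerns differences in error rates rather than in decision rates, so a group can receive positive decisions at the same rate yet bear a different kind of error. A hypothetical sketch comparing false positive and false negative rates across two groups, with invented labels:

```python
def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for 0/1 labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Both groups are misclassified once out of four, so overall accuracy is
# equal, but group A suffers false positives while group B suffers false
# negatives: disparate mistreatment despite equal accuracy.
fpr_a, fnr_a = error_rates([0, 0, 1, 1], [1, 0, 1, 1])  # (0.5, 0.0)
fpr_b, fnr_b = error_rates([0, 0, 1, 1], [0, 0, 0, 1])  # (0.0, 0.5)
```

This is only an illustration of the criterion; the cited paper's contribution is enforcing it during training via convex-concave constraints, which the sketch does not attempt.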

Racial Profiling and the Political Philosophy of Race

Philosophical reflection on racial profiling tends to take one of two forms. The first sees it as an example of 'statistical discrimination' (SD), raising the question of when, if ever,…

On the Currency of Egalitarian Justice

In his Tanner Lecture of 1979 called "Equality of What?" Amartya Sen asked what metric egalitarians should use to establish the extent to which their ideal is realized in a given society. What…

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
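The constraint that similar individuals be treated similarly is formalized as a Lipschitz condition: the distance between a randomized classifier's output distributions for two individuals must not exceed their task-specific distance. A minimal sketch for the binary-outcome case, where total variation distance between the two output distributions reduces to a difference of probabilities (the task metric itself is assumed given, as in the paper):

```python
def lipschitz_ok(p_x, p_y, d_xy):
    """Check the individual-fairness Lipschitz condition for one pair.

    p_x, p_y: probabilities of the positive outcome that a randomized
    classifier assigns to individuals x and y.
    d_xy: the (hypothetical) task-specific distance between x and y.
    For Bernoulli outputs, total variation distance is |p_x - p_y|.
    """
    return abs(p_x - p_y) <= d_xy

# Two similar individuals (distance 0.2) treated similarly: satisfied.
lipschitz_ok(0.7, 0.6, 0.2)   # True
# Same distance but very different treatment: violated.
lipschitz_ok(0.9, 0.1, 0.2)   # False
```

The hard part in practice, as the abstract hints with "(hypothetical)", is obtaining the similarity metric in the first place; the check itself is trivial once the metric exists.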

What's so Bad about Discrimination?

The article argues that discrimination is bad as such when and because it undermines equality of opportunity. It shows, first, that other accounts, such as those concerning intent, efficiency, false…

Inherent Trade-Offs in the Fair Determination of Risk Scores

Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
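The incompatibility can be made concrete with an invented example: when two groups have different base rates, a predictor can equalize precision and true positive rates across groups, yet its false positive rates must then differ. The counts below are fabricated purely for illustration:

```python
def rates(tp, fp, fn, tn):
    """True positive rate, false positive rate, and precision (PPV)
    from one group's confusion-matrix counts."""
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    ppv = tp / (tp + fp)
    return tpr, fpr, ppv

# Group A: base rate 0.5 (50 positives, 50 negatives out of 100).
tpr_a, fpr_a, ppv_a = rates(tp=40, fp=10, fn=10, tn=40)
# Group B: base rate 0.2 (20 positives, 80 negatives out of 100).
tpr_b, fpr_b, ppv_b = rates(tp=16, fp=4, fn=4, tn=76)
# Both groups: TPR = 0.8 and PPV = 0.8, but FPR is 0.2 vs 0.05.
```

This mirrors the paper's general result: outside degenerate cases (equal base rates or a perfect predictor), calibration-style and error-rate-balance notions of fairness cannot all hold at once.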

A Theory of Justice

  • J. Rawls
  • Law
  • Princeton Readings in Political Thought
  • 1971
John Rawls is Professor Emeritus at Harvard University. He is the author of the well-known and path-breaking A Theory of Justice (Harvard, 1971) and the more recent work Political Liberalism.

Big Data's Disparate Impact

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.

Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation

This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence.