Corpus ID: 231986177

Everything is Relative: Understanding Fairness with Optimal Transport

@article{KwegyirAggrey2021EverythingIR,
  title={Everything is Relative: Understanding Fairness with Optimal Transport},
  author={Kweku Kwegyir-Aggrey and Rebecca Santorella and Sarah M. Brown},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.10349}
}
To study discrimination in automated decision-making systems, scholars have proposed several definitions of fairness, each expressing a different fair ideal. These definitions require practitioners to make complex decisions about which notion to employ, and they are often difficult to use in practice because they render a binary judgement (a system is fair or unfair) rather than explaining the structure of the detected unfairness. We present an optimal transport-based approach to fairness that offers an…
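
The abstract is cut off, but the core idea it names, treating group disparity as an optimal transport problem between distributions, can be illustrated with a minimal hypothetical sketch. The simulated scores and all names below are invented for illustration and are not the authors' method; only `scipy.stats.wasserstein_distance` is a real API.

```python
# Hypothetical sketch: measuring group disparity as an optimal transport
# (1-Wasserstein) cost between model score distributions.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Simulated model scores for two demographic groups (invented data).
scores_a = rng.beta(2, 5, size=1000)  # group A skews toward low scores
scores_b = rng.beta(5, 2, size=1000)  # group B skews toward high scores

# The 1-Wasserstein distance is the minimal "work" needed to morph one
# score distribution into the other; 0 would indicate statistical parity.
gap = wasserstein_distance(scores_a, scores_b)
print(f"Transport cost between group score distributions: {gap:.3f}")
```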
1 Citation

How Gender Debiasing Affects Internal Model Representations, and Why It Matters

This work debiases a model during downstream fine-tuning, which reduces extrinsic bias, and measures the effect on intrinsic bias, operationalized as bias extractability with information-theoretic probing; together this provides a comprehensive perspective on bias in NLP models.

References

Showing 1-10 of 39 references

Obtaining Fairness using Optimal Transport Theory

The goals of this paper are to detect when a binary classification rule lacks fairness and to mitigate the potential discrimination attributable to it by modifying either the classifier or the data itself.
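
The data-repair idea this reference describes can be sketched concretely. In one dimension, the Wasserstein barycenter's quantile function is the average of the groups' quantile functions, so "repairing" scores reduces to quantile matching. The sketch below is a hypothetical illustration of that general technique under invented names, not this paper's exact procedure.

```python
# Hypothetical sketch: push each group's score distribution toward their
# 1-D Wasserstein barycenter via quantile matching.
import numpy as np

def repair_scores(scores_a, scores_b, n_quantiles=101):
    """Map both groups' scores onto a common barycenter distribution."""
    qs = np.linspace(0, 1, n_quantiles)
    # In 1-D, the barycenter averages the groups' quantile functions.
    barycenter = 0.5 * (np.quantile(scores_a, qs) + np.quantile(scores_b, qs))

    def transport(scores):
        # Empirical rank of each score within its own group...
        ranks = (np.argsort(np.argsort(scores)) + 0.5) / len(scores)
        # ...then read off the barycenter value at that rank.
        return np.interp(ranks, qs, barycenter)

    return transport(scores_a), transport(scores_b)
```

After repair, both groups' scores approximately follow the same distribution, removing the group signal at minimal total transport cost.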

Matching code and law: achieving algorithmic fairness with optimal transport

The explicit formalization of the trade-off between individual and group fairness allows this post-processing approach to be tailored to different situational contexts in which one or the other fairness criterion may take precedence.

Fairness Constraints: Mechanisms for Fair Classification

This paper introduces a flexible mechanism for designing fair classifiers by leveraging a novel, intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in accuracy.
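
One widely used decision-boundary (un)fairness measure in this line of work is the covariance between the sensitive attribute and the signed distance to a linear decision boundary; constraining its magnitude yields the fine-grained fairness control described above. The sketch below illustrates that measure under assumed names and is not lifted from the paper.

```python
# Hypothetical sketch of a decision-boundary (un)fairness measure:
# covariance between sensitive attribute z and the signed margin.
import numpy as np

def boundary_covariance(X, z, theta):
    """Covariance between z and the signed distance X @ theta.

    A value near 0 suggests decisions are roughly independent of z;
    constraining |cov| <= c trades fairness against accuracy.
    """
    margins = X @ theta
    return np.mean((z - z.mean()) * margins)
```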

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
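
The fairness constraint named here is a Lipschitz condition: for a task-specific metric d and a randomized classifier M, it requires D(M(x), M(y)) <= d(x, y) for every pair of individuals. A minimal, hypothetical checker for that condition might look like the following; the brute-force pairwise loop, the choice of total variation distance for D, and all names are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: brute-force check of the individual-fairness
# Lipschitz condition D(M(x), M(y)) <= d(x, y) over all pairs.
import itertools
import numpy as np

def lipschitz_violations(X, probs, task_metric):
    """Return index pairs whose output distance exceeds their task distance.

    X: feature rows; probs: per-row class-probability vectors (M's outputs);
    task_metric: callable d(x_i, x_j) giving task-specific similarity.
    """
    bad = []
    for i, j in itertools.combinations(range(len(X)), 2):
        # Total variation distance between the two output distributions.
        out_dist = 0.5 * np.abs(probs[i] - probs[j]).sum()
        if out_dist > task_metric(X[i], X[j]):
            bad.append((i, j))
    return bad
```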

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.

FlipTest: fairness testing via optimal transport

An evaluation on three case studies shows that the approach provides a computationally inexpensive way to identify subgroups that may be harmed by model discrimination, including cases where the model satisfies group fairness criteria.
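
As a rough illustration of the flip-counting idea: optimal transport between two equal-size empirical distributions reduces to a minimum-cost bipartite matching, so an exact assignment between the groups can stand in for the learned transport map used in the paper. Everything below (equal group sizes, the `predict` callable, the names) is an assumption for the sketch.

```python
# Hypothetical sketch of FlipTest-style auditing: map each member of
# group A to a "counterpart" in group B via optimal transport, then
# flag individuals whose model decision flips under the mapping.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def flip_set(X_a, X_b, predict):
    """Indices in group A whose decision changes under their mapped twin.

    Assumes equal-size groups, where exact optimal transport between the
    empirical distributions reduces to a minimum-cost assignment.
    """
    cost = cdist(X_a, X_b)                     # pairwise transport costs
    rows, cols = linear_sum_assignment(cost)   # optimal matching
    y_a = predict(X_a[rows])
    y_b = predict(X_b[cols])
    return rows[y_a != y_b]
```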

A General Approach to Fairness with Optimal Transport

This work uses optimal transport theory to derive target distributions and methods that achieve fairness with minimal changes to the unfair model, attaining a Pareto-optimal trade-off between accuracy and fairness.

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects

An extensive literature review is performed, and an overview of prospective research directions for the community is provided, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.

Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse

A. Hoffmann. Information, Communication & Society, 2019.
Problems of bias and fairness are central to data justice, as they speak directly to the threat that ‘big data’ and algorithmic decision-making may worsen already existing injustices…