Corpus ID: 239024790

A Burden Shared is a Burden Halved: A Fairness-Adjusted Approach to Classification

Bradley Rava, Wenguang Sun, Gareth M. James, Xin Tong
We study fairness in classification, where one wishes to make automated decisions for people from different protected groups. When individuals are classified, the decision errors can be unfairly concentrated in certain protected groups. We develop a fairness-adjusted selective inference (FASI) framework and data-driven algorithms that achieve statistical parity in the sense that the false selection rate (FSR) is controlled and equalized among protected groups. The FASI algorithm operates by… 
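As a rough illustration of the group-wise selection idea in the abstract, the toy sketch below picks a separate score threshold for each protected group so that the empirical false-selection proportion among the selected points stays below a target level α in every group. This is a minimal sketch with hypothetical helper names, not the authors' FASI procedure, and it evaluates the thresholds on the same data used to set them.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_thresholds(scores, labels, groups, alpha):
    """For each group, find the smallest score threshold such that the
    empirical false-selection proportion (selected points whose true label
    is 0) among points at or above it is at most alpha.
    Toy sketch; not the paper's FASI algorithm."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        y = labels[groups == g]
        order = np.argsort(-s)              # high score = confident "positive"
        s, y = s[order], y[order]
        false_sel = np.cumsum(y == 0)       # selected-but-negative counts
        fsp = false_sel / np.arange(1, len(s) + 1)
        ok = np.where(fsp <= alpha)[0]
        # threshold = score at the deepest rank still within the FSR budget
        thresholds[g] = s[ok[-1]] if len(ok) else np.inf
    return thresholds

# Synthetic data: scores correlated with labels, two protected groups.
n = 2000
groups = rng.integers(0, 2, n)
labels = rng.integers(0, 2, n)
scores = labels + rng.normal(0.0, 0.7, n)

t = group_thresholds(scores, labels, groups, alpha=0.1)
selected = scores >= np.array([t[g] for g in groups])
for g in (0, 1):
    sel = selected & (groups == g)
    fsp = (labels[sel] == 0).mean() if sel.any() else 0.0
    print(f"group {g}: selected {sel.sum()}, false-selection proportion {fsp:.2f}")
```

By construction, each group's empirical false-selection proportion stays at or below α, so the error burden is equalized across groups rather than concentrated in one of them.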

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Fairness through awareness
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining how similar individuals are with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Inherent Trade-Offs in the Fair Determination of Risk Scores
Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
With Malice Towards None: Assessing Uncertainty via Equalized Coverage
This work presents an operational methodology that achieves equitable treatment by offering rigorous distribution-free coverage guarantees holding in finite samples, and tests the applicability of the proposed framework on real data, demonstrating that equalized coverage constructs unbiased prediction intervals, unlike competing methods.
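The group-conditional coverage idea can be sketched with split-conformal prediction intervals computed separately within each protected group, so that each group attains roughly the same marginal coverage. This is a minimal illustration under a toy mean-zero predictor; the function and variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def group_conformal_intervals(resid_cal, groups_cal, groups_test, preds_test,
                              alpha=0.1):
    """Split-conformal intervals with a separate calibration quantile per
    group, so coverage ~ 1 - alpha holds within each group.
    Minimal sketch of the equalized-coverage idea, not the paper's code."""
    lo = np.empty(len(preds_test))
    hi = np.empty(len(preds_test))
    for g in np.unique(groups_cal):
        r = np.sort(np.abs(resid_cal[groups_cal == g]))
        n = len(r)
        # Finite-sample conformal quantile: k-th smallest, k = ceil((n+1)(1-a)).
        k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
        q = r[k - 1]
        m = groups_test == g
        lo[m], hi[m] = preds_test[m] - q, preds_test[m] + q
    return lo, hi

# Toy regression: group 1 is noisier, so it should get wider intervals.
n = 4000
g_cal = rng.integers(0, 2, n)
resid_cal = rng.normal(0.0, 1 + 2 * g_cal)   # residual sd: 1 vs 3
g_test = rng.integers(0, 2, n)
y_test = rng.normal(0.0, 1 + 2 * g_test)
preds = np.zeros(n)                          # mean-zero predictor

lo, hi = group_conformal_intervals(resid_cal, g_cal, g_test, preds, alpha=0.1)
covered = (y_test >= lo) & (y_test <= hi)
for g in (0, 1):
    m = g_test == g
    print(f"group {g}: coverage {covered[m].mean():.3f}, "
          f"mean width {(hi - lo)[m].mean():.2f}")
```

The noisier group receives wider intervals, but both groups end up near the same 1 − α coverage level, which is the sense in which treatment is equalized.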
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
Fairness in Machine Learning
It is shown how causal Bayesian networks can play an important role to reason about and deal with fairness, especially in complex unfairness scenarios, and how optimal transport theory can be leveraged to develop methods that impose constraints on the full shapes of distributions corresponding to different sensitive attributes.
Achieving Equalized Odds by Resampling Sensitive Attributes
We present a flexible framework for learning predictive models that approximately satisfy the equalized odds notion of fairness. This is achieved by introducing a general discrepancy functional that…
Equality of Opportunity in Supervised Learning
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
The proposed SDA filter first constructs a sequence of ranking statistics that fulfill global symmetry properties, then chooses a data-driven threshold along the ranking to control the FDR; the asymptotic validity of SDA for both FDR and false discovery proportion (FDP) control is established under mild regularity conditions.
Classification with reject option
This paper studies two-class (or binary) classification of elements X in ℝᵏ that allows for a reject option. Based on n independent copies of the pair of random variables (X, Y) with X ∈ ℝᵏ and Y ∈ …