Corpus ID: 218869671

The price for fairness in a regression framework

@article{LeGouic2020ThePF,
  title={The price for fairness in a regression framework},
  author={Thibaut Le Gouic and Jean-Michel Loubes},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.11720}
}
We consider the problem of achieving fairness in a regression framework. Fairness is here expressed as demographic parity. We provide a control on the loss in generalization error when a fairness constraint is imposed, hence computing the cost of fairness for a regressor. Then, using optimal transport theory, we provide a way to construct a fair regressor that is optimal in the sense that it achieves the optimal generalization bound. This regressor is obtained by a post-processing methodology.
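
In the one-dimensional case, the post-processing the abstract refers to amounts to mapping each group's distribution of predictions onto the Wasserstein-2 barycenter of the group-wise distributions, which keeps each prediction's within-group rank while making the output distribution independent of the sensitive attribute. The following is a minimal NumPy sketch of that kind of quantile-based transformation; the function name, the quantile grid and the calibration-set setup are illustrative assumptions, not the paper's code.

    import numpy as np

    def barycenter_postprocess(scores, groups):
        """Sketch: push each group's score distribution onto the 1D
        Wasserstein-2 barycenter of the group-wise distributions,
        enforcing demographic parity of the post-processed scores.
        scores: raw predictions f(X) on a calibration set.
        groups: corresponding sensitive-attribute values."""
        scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
        labels, counts = np.unique(groups, return_counts=True)
        weights = counts / counts.sum()                 # group proportions p_s
        t = np.linspace(0.0, 1.0, 1001)                 # shared quantile grid
        sorted_scores = {s: np.sort(scores[groups == s]) for s in labels}
        # Group quantile functions F_s^{-1} and their weighted average,
        # which is the barycenter quantile function F_B^{-1}.
        quantiles = {s: np.quantile(sorted_scores[s], t) for s in labels}
        q_bar = sum(w * quantiles[s] for s, w in zip(labels, weights))

        def fair_predict(score, s):
            xs = sorted_scores[s]
            u = np.searchsorted(xs, score, side="right") / len(xs)  # u = F_s(score)
            return float(np.interp(u, t, q_bar))                    # F_B^{-1}(u)

        return fair_predict

Applied to a new prediction, fair_predict preserves the prediction's rank within its own group but draws its value from the common barycenter distribution, so the distribution of post-processed outputs no longer depends on the sensitive attribute.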

Citations

Fair Regression under Sample Selection Bias

This paper develops a framework for fair regression under sample selection bias, where the dependent-variable values of some training samples are missing as the result of another, hidden process; it uses the classic Heckman model for bias correction and Lagrange duality to achieve fairness in regression under a variety of fairness notions.
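
As a rough illustration of the Lagrangian formulation mentioned above (and not of that paper's Heckman-corrected estimator), a demographic-parity constraint on a regressor can be relaxed into a penalty on the gap between group-wise mean predictions. The sketch below, whose function name, two-group assumption and fixed multiplier lam are all assumptions made for illustration, fits a linear model this way with plain gradient descent.

    import numpy as np

    def fit_fair_linear(X, y, groups, lam=1.0, lr=0.01, epochs=2000):
        """Sketch: least-squares regression with a Lagrangian-style penalty
        lam * |mean prediction gap between two groups| (a relaxed
        demographic-parity constraint). Illustrative only."""
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float)
        g = np.asarray(groups)
        a, b = g == np.unique(g)[0], g == np.unique(g)[1]    # two-group case
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            pred = X @ w
            gap = pred[a].mean() - pred[b].mean()             # parity violation
            grad_mse = 2.0 * X.T @ (pred - y) / len(y)
            grad_gap = X[a].mean(axis=0) - X[b].mean(axis=0)  # d gap / d w
            w -= lr * (grad_mse + lam * np.sign(gap) * grad_gap)
        return w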

Fairness with Continuous Optimal Transport

A stochastic-gradient fairness method based on a dual formulation of continuous OT that gives superior performance to discrete OT methods when little data is available to solve the OT problem and similar performance otherwise, and that can continually adjust the model parameters to adapt to changes in the level of unfairness.

Gradient descent algorithms for Bures-Wasserstein barycenters

A framework is developed, by employing a Polyak-Lojasiewicz (PL) inequality, to derive global rates of convergence for both gradient descent and stochastic gradient descent, despite the fact that the barycenter functional is not geodesically convex.
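
For context, the generic form of a Polyak-Lojasiewicz inequality for an L-smooth functional F with minimum value F* (that paper works with a Bures-Wasserstein analogue of this condition) can be written as follows, and it is what replaces convexity in the convergence analysis:

    \[
      \frac{1}{2}\,\|\nabla F(x)\|^{2} \;\ge\; \mu\,\bigl(F(x) - F^{*}\bigr)
      \quad \text{for some } \mu > 0,
    \]
    under which gradient descent with step size $1/L$ satisfies
    $F(x_k) - F^{*} \le (1 - \mu/L)^{k}\,\bigl(F(x_0) - F^{*}\bigr)$.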

References

A continuous framework for fairness

The Continuous Fairness Algorithm (CFA) is proposed, which enables a continuous interpolation between different fairness definitions and uses optimal transport theory, specifically the concept of the barycenter, to maximize decision-maker utility under the chosen fairness constraints.

Obtaining Fairness using Optimal Transport Theory

The goals of this paper are to detect when a binary classification rule lacks fairness and to mitigate the potential discrimination attributable to it by modifying either the classifier or the data itself.

Fair regression via plug-in estimator and recalibration with statistical guarantees

This work studies the problem of learning an optimal regression function subject to a fairness constraint by leveraging a proxy-discretized version of the problem, for which an explicit expression of the optimal fair predictor is derived.

Fairness Constraints: Mechanisms for Fair Classification

This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision-boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in accuracy.
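
The (un)fairness measure referred to above is, in essence, the empirical covariance between the sensitive attribute and the signed distance to the decision boundary, which can then be bounded as a constraint during training. A minimal sketch for a linear classifier is given below; the array names and the linear decision function are assumptions.

    import numpy as np

    def boundary_covariance(theta, X, z):
        """Empirical covariance between sensitive attribute z and the
        signed distance X @ theta to a linear decision boundary;
        constraining |covariance| <= c yields a fairness-constrained
        training problem of the kind described above."""
        d = np.asarray(X, dtype=float) @ np.asarray(theta, dtype=float)
        z = np.asarray(z, dtype=float)
        return float(np.mean((z - z.mean()) * (d - d.mean())))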

Fair Regression: Quantitative Definitions and Reduction-based Algorithms

This paper studies the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race, and proposes general schemes for fair regression under two notions of fairness.

Fairness risk measures

A new definition of fairness is proposed that generalises some existing proposals while allowing for generic sensitive features and resulting in a convex objective; the paper also shows how this relates to the rich literature on risk measures from mathematical finance.

A comparative study of fairness-enhancing interventions in machine learning

It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.

Fairness in Machine Learning

It is shown how causal Bayesian networks can play an important role to reason about and deal with fairness, especially in complex unfairness scenarios, and how optimal transport theory can be leveraged to develop methods that impose constraints on the full shapes of distributions corresponding to different sensitive attributes.

A central limit theorem for Lp transportation cost on the real line with application to fairness assessment in machine learning

A consistent estimate of the asymptotic variance is provided, which enables the construction of two-sample tests and confidence intervals to certify the similarity between two distributions, and is used to assess a new criterion of dataset fairness in classification.
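
In practice, the quantity whose limiting distribution that paper studies can be estimated directly from two groups' score distributions; the short sketch below computes the 1D Wasserstein-1 (L1 transportation) distance with SciPy as a point estimate of disparity, on synthetic scores used purely for illustration. The paper's variance estimate is what would turn such a point estimate into tests and confidence intervals.

    import numpy as np
    from scipy.stats import wasserstein_distance

    # Synthetic model outputs for two sensitive groups (illustration only).
    rng = np.random.default_rng(0)
    scores_a = rng.normal(0.0, 1.0, size=500)
    scores_b = rng.normal(0.3, 1.0, size=500)

    # Point estimate of the L1 transportation cost between the two
    # empirical score distributions, usable as a disparity measure.
    disparity = wasserstein_distance(scores_a, scores_b)
    print(f"empirical W1 disparity: {disparity:.3f}")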

FlipTest: fairness testing via optimal transport

The approach is evaluated on three case studies, showing that it provides a computationally inexpensive way to identify subgroups that may be harmed by model discrimination, including cases where the model satisfies group fairness criteria.