Non-Discriminatory Machine Learning through Convex Fairness Criteria

@article{Goel2018NonDiscriminatoryML,
  title={Non-Discriminatory Machine Learning through Convex Fairness Criteria},
  author={Naman Goel and Mohammad Yaghini and B. Faltings},
  journal={Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society},
  year={2018}
}
We introduce a novel technique to achieve non-discrimination in machine learning without sacrificing convexity and probabilistic interpretation. We also propose a new notion of fairness for machine learning, called weighted proportional fairness, and show that our technique satisfies this subjective fairness criterion.
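The abstract does not spell out the criterion itself, so the following is only a rough sketch of what a convex, probabilistically interpretable fairness objective can look like: logistic-regression log loss plus a weighted squared gap between group-average scores. The penalty form, the group weights, and all names below are illustrative assumptions, not the paper's exact formulation.

# Illustrative sketch (not the paper's exact criterion): logistic log loss
# plus a convex penalty on weighted gaps between group-average linear scores.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(theta, X, y, groups, lam, weights):
    """Mean log loss + lam * convex fairness penalty; returns (loss, gradient)."""
    p = sigmoid(X @ theta)
    eps = 1e-12
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)

    # Group-mean linear scores are linear in theta, so each squared gap is convex.
    scores = X @ theta
    overall_mean = scores.mean()
    penalty, pen_grad = 0.0, np.zeros_like(theta)
    for g, w in weights.items():
        mask = groups == g
        gap = scores[mask].mean() - overall_mean
        penalty += w * gap ** 2
        pen_grad += w * 2.0 * gap * (X[mask].mean(axis=0) - X.mean(axis=0))
    return log_loss + lam * penalty, grad + lam * pen_grad

# Tiny synthetic example: two groups, plain gradient descent on the penalized loss.
rng = np.random.default_rng(0)
n, d = 400, 3
X = rng.normal(size=(n, d))
groups = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.8 * groups + rng.normal(scale=0.5, size=n) > 0).astype(float)

theta = np.zeros(d)
for _ in range(500):
    _, grad = loss_and_grad(theta, X, y, groups, lam=1.0, weights={0: 1.0, 1: 1.0})
    theta -= 0.1 * grad
print("learned parameters:", theta)

Because the group-mean scores are linear in theta, the weighted squared gaps are convex, so the penalized objective stays convex and still yields probabilistic (sigmoid) outputs.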

Citations

Fairness for Robust Log Loss Classification
A new classifier is derived that incorporates fairness criteria into its worst-case logarithmic loss minimization; the classifier takes the form of a minimax game and produces a parametric exponential-family conditional distribution resembling truncated logistic regression.
Statistical Equity: A Fairness Classification Objective
A new fairness definition is proposed, motivated by the principle of equity, that considers existing biases in the data and attempts to make equitable decisions that account for these historical biases.
Counterfactual fairness: removing direct effects through regularization
This work develops regularizers to tackle classical fairness measures and presents a causal regularizer that satisfies the new fairness definition by removing the impact of unprivileged-group variables on the model outcomes, as measured by the controlled direct effect (CDE).
Fairness by Explicability and Adversarial SHAP Learning
The ability to understand and trust the fairness of model predictions, particularly when considering the outcomes of unprivileged groups, is critical to the deployment and adoption of machine learning systems.
Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms
It is found that fairness-unaware algorithms typically fail to produce adequately fair models and that the simplest algorithms are not necessarily the fairest ones, while fairness-aware algorithms can induce fairness without material drops in predictive power.
FairCanary: Rapid Continuous Explainable Fairness
This paper presents Quantile Demographic Drift, a new fairness score that is easily interpretable via existing attribution techniques, and also extends naturally to individual fairness via the principle of like-for-like comparison, which can be used to measure intra-group privilege.
Fair Classification with Counterfactual Learning
A counterfactual framework to model fairness-aware learning, which benefits from counterfactual reasoning to achieve fairer decision support systems, is designed.
Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees
A meta-algorithm for classification is proposed that can take as input a general class of fairness constraints with respect to multiple non-disjoint and multi-valued sensitive attributes, and which comes with provable guarantees.
Crowdsourcing with Fairness, Diversity and Budget Constraints
This work proposes a novel algorithm which maximizes the expected accuracy of the collected data, while ensuring that the errors satisfy desired notions of fairness, and provides guarantees on the performance of the algorithm.
Fair Classification with Adversarial Perturbations
The main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness; near-tightness of the framework's guarantees is proved for natural hypothesis classes.

References

Showing 1-10 of 33 references.
Rawlsian Fairness for Machine Learning
This work studies a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity", and gives an algorithm that satisfies this fairness constraint, while still being able to learn at a rate comparable to (but necessarily worse than) that of the best algorithms absent a fairness constraint.
Equality of Opportunity in Supervised Learning
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
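As a minimal illustration of this kind of post-processing (under the equal-opportunity reading: equalize true positive rates across groups), the sketch below keeps a model's scores fixed and picks a per-group threshold whose true positive rate is closest to a common target. The function name, the target rate, and the synthetic data are assumptions for illustration, not the authors' optimal adjustment procedure.

# Hedged sketch: per-group thresholds chosen so that true positive rates
# roughly match a common target (an equal-opportunity style adjustment).
import numpy as np

def equalize_tpr_thresholds(scores, y, groups, target_tpr=0.8):
    """For each group, pick the score threshold whose TPR is closest to target_tpr."""
    thresholds = {}
    for g in np.unique(groups):
        pos_scores = scores[(groups == g) & (y == 1)]   # positives in this group
        candidates = np.unique(pos_scores)
        # TPR at threshold t is the fraction of positives with score >= t.
        tprs = np.array([(pos_scores >= t).mean() for t in candidates])
        thresholds[g] = candidates[np.argmin(np.abs(tprs - target_tpr))]
    return thresholds

# Toy usage with synthetic scores in which group 1 receives inflated scores.
rng = np.random.default_rng(1)
n = 1000
groups = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
scores = rng.random(n) + 0.2 * y + 0.1 * groups
print("per-group thresholds:", equalize_tpr_thresholds(scores, y, groups))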
Fairness in Learning: Classic and Contextual Bandits
A tight connection between fairness and the KWIK (Knows What It Knows) learning model is proved: a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and a worst-case exponential gap in regret between fair and non-fair learning algorithms.
Fairness Constraints: Mechanisms for Fair Classification
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
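One common concrete instance of such a decision-boundary (un)fairness measure, sketched here for a linear model, is the empirical covariance between the sensitive attribute and the signed distance to the decision boundary; bounding its magnitude during training then acts as the fairness knob. Treat the specific formula and names below as an assumed illustration rather than a restatement of the paper's mechanism.

# Hedged sketch: empirical covariance between a binary sensitive attribute z
# and the signed distance theta . x to a linear decision boundary.
import numpy as np

def boundary_covariance(theta, X, z):
    """Covariance between z and the (unnormalized) signed distance X @ theta."""
    d = X @ theta
    return np.mean((z - z.mean()) * (d - d.mean()))

# Toy usage: when the boundary aligns with the sensitive attribute, the
# covariance is large; a fair-training mechanism would constrain |covariance| <= c.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
z = (X[:, 0] > 0).astype(float)          # sensitive attribute tied to feature 0
theta = np.array([1.0, 0.2, -0.1])
print("boundary covariance:", boundary_covariance(theta, X, z))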
From Parity to Preference-based Notions of Fairness in Classification
This paper draws inspiration from the fair-division and envy-freeness literature in economics and game theory and proposes preference-based notions of fairness, under which any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups.
The Price of Fairness
The price of fairness is introduced and studied: the relative loss in system efficiency under a “fair” allocation, where a fully efficient allocation is one that maximizes the sum of player utilities.
Fairness-Aware Classifier with Prejudice Remover Regularizer
A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
Inherent Trade-Offs in the Fair Determination of Risk Scores
Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
A new notion of unfairness, disparate mistreatment, defined in terms of misclassification rates, is introduced for decision-boundary-based classifiers; it can be easily incorporated into their formulation as convex-concave constraints.
Fairness through awareness
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.