Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning

@inproceedings{GrgicHlaca2018BeyondDF,
  title={Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning},
  author={Nina Grgic-Hlaca and Muhammad Bilal Zafar and Krishna P. Gummadi and Adrian Weller},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2018}
}
With the widespread use of machine learning methods in numerous domains involving humans, several studies have raised questions about the potential for unfairness towards certain individuals or groups. A number of recent works have proposed methods to measure and eliminate unfairness from machine learning models. However, most of this work has focused on only one dimension of fair decision making: distributive fairness, i.e., the fairness of the decision outcomes. In this work, we leverage the…
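
As a rough illustration of the paper's theme, the Python sketch below greedily selects predictive features from a pool that users have judged fair to use. The greedy cross-validation criterion, the fair_features input, and the scikit-learn model are illustrative assumptions, not the paper's actual method.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_fair_selection(X, y, fair_features, max_features=5):
    """Greedily add features, restricted to those judged fair by users.
    X is assumed to be a pandas DataFrame keyed by feature name."""
    selected, remaining = [], list(fair_features)
    while remaining and len(selected) < max_features:
        # Score each candidate by the cross-validated accuracy it adds.
        scores = {f: cross_val_score(LogisticRegression(max_iter=1000),
                                     X[selected + [f]], y, cv=5).mean()
                  for f in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected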

Citations

The Role of Accuracy in Algorithmic Process Fairness Across Multiple Domains

It is shown that, in every domain, disagreements in fairness judgements can be largely explained by the assignments of properties to features, and that fairness judgements can be well predicted across domains by training a predictor on the property assignments from one domain's data and predicting in another.

LimeOut: An Ensemble Approach To Improve Process Fairness

This paper considers the problem of making classifiers fairer by reducing their dependence on sensitive features while increasing (or at least maintaining) their accuracy, and proposes a framework that relies on "feature drop-out" techniques in neural-based approaches to tackle process fairness.
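
A minimal sketch of the feature drop-out idea, assuming that dropping a feature can be approximated by zeroing its column and that logistic regression stands in for the base classifier (LimeOut itself chooses which features to drop from LIME explanations):

import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def dropout_ensemble(X, y, sensitive_cols, base=None):
    """Train one model per sensitive column, with that column neutralized."""
    base = base or LogisticRegression(max_iter=1000)
    models = []
    for col in sensitive_cols:
        X_drop = X.copy()
        X_drop[:, col] = 0.0  # crude stand-in for dropping the feature
        models.append(clone(base).fit(X_drop, y))
    return models

def ensemble_predict_proba(models, X):
    # Averaging dilutes the dependence on any single sensitive feature.
    return np.mean([m.predict_proba(X) for m in models], axis=0)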

Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

This work descriptively surveys users on how they perceive and reason about fairness in algorithmic decision making, and proposes a framework for understanding why people perceive certain features as fair or unfair to use in algorithms.

Causal Feature Selection for Algorithmic Fairness

This work proposes an approach to identify a sub-collection of features that ensures fairness of the dataset by performing conditional independence tests between different subsets of features; it theoretically proves the correctness of the proposed algorithm and shows that a sublinear number of conditional independence tests is sufficient to identify these variables.
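
One such test can be sketched with partial correlation, under a linear-Gaussian assumption that stands in for the paper's more general tests; Z is assumed to already include an intercept column:

import numpy as np
from scipy import stats

def partial_corr_test(x, y, Z):
    """Test whether x and y are independent given Z, via the
    correlation of their least-squares residuals on Z."""
    def residuals(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    r, p = stats.pearsonr(residuals(x), residuals(y))
    return r, p  # small |r| and large p suggest conditional independence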

Fairness in Algorithmic Decision-Making: Applications in Multi-Winner Voting, Machine Learning, and Recommender Systems

This study illustrates a multitude of fairness properties studied in these three streams of literature, discusses their commonalities and interrelationships, synthesizes what is known so far, and provides a useful perspective for future research.

Crowdsourcing Perceptions of Fair Predictors for Machine Learning

This study recruits 90 crowdworkers to judge the inclusion of various predictors for recidivism and finds that agreement with the majority vote is higher when participants are part of a more diverse group.

Making ML models fairer through explanations: the case of LimeOut

This paper presents different experiments on multiple datasets and several state of the art classifiers, which show that LimeOut's classifiers improve (or at least maintain) not only process fairness but also other fairness metrics such as individual and group fairness, equal opportunity, and demographic parity.

Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness

This work develops the position that building fair decision-making systems requires overcoming limitations that, it is argued, are inherent to each field, and builds an encompassing framework that cohesively bridges the individual frameworks of mechanism design and machine learning.

A Review on Fairness in Machine Learning

An overview is presented of the main concepts for identifying, measuring, and improving algorithmic fairness when using ML algorithms, focusing primarily on classification tasks.

Democratizing Algorithmic Fairness

This paper aims to foreground the political dimension of algorithmic fairness and supplement the current discussion with a deliberative approach to algorithmic fairness based on the accountability for reasonableness framework (AFR).
...

References

Showing 1-10 of 41 references

Fairness Constraints: Mechanisms for Fair Classification

This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in terms of accuracy.
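
The measure is, roughly, the covariance between the sensitive attribute and the signed distance to the decision boundary; for a linear classifier with weights theta, a simplified version can be computed as follows:

import numpy as np

def boundary_covariance(theta, X, z):
    """Empirical covariance between a sensitive attribute z and the
    signed distance X @ theta to a linear decision boundary."""
    d = X @ theta
    return np.mean((z - z.mean()) * d)  # near zero: boundary ignores z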

Counterfactual Fairness

This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.

Avoiding Discrimination through Causal Reasoning

This work crisply articulates why and when observational criteria fail, thus formalizing what was previously a matter of opinion; it puts forward natural causal non-discrimination criteria and develops algorithms that satisfy them.

Fairness-Aware Classifier with Prejudice Remover Regularizer

A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
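
As a simplified sketch, the squared covariance between predictions and the sensitive attribute stands in below for the paper's mutual-information "prejudice remover" regularizer, added to a hand-rolled logistic regression:

import numpy as np

def fit_fair_logreg(X, y, z, lam=1.0, lr=0.1, steps=2000):
    """Gradient descent on logistic loss plus lam * Cov(z, p)^2."""
    w = np.zeros(X.shape[1])
    zc = z - z.mean()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad_nll = X.T @ (p - y) / len(y)      # logistic loss gradient
        cov = np.mean(zc * p)
        # Gradient of cov^2: 2 * cov * d(cov)/dw, with dp/dw = p(1-p) x.
        grad_pen = 2 * cov * X.T @ (zc * p * (1 - p)) / len(y)
        w -= lr * (grad_nll + lam * grad_pen)
    return w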

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

A new notion of unfairness, disparate mistreatment, defined in terms of misclassification rates, is introduced; for decision boundary-based classifiers it can be easily incorporated into the formulation as convex-concave constraints.
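
The metric itself is easy to compute; a small sketch for a binary sensitive attribute z (the 0/1 group coding and the absolute-gap summary are illustrative choices):

import numpy as np

def mistreatment_gaps(y_true, y_pred, z):
    """Absolute gaps in false positive and false negative rates
    between the groups z == 0 and z == 1."""
    err = y_pred != y_true
    gaps = {}
    for name, cond in (("FPR", y_true == 0), ("FNR", y_true == 1)):
        r0 = err[cond & (z == 0)].mean()
        r1 = err[cond & (z == 1)].mean()
        gaps[name] = abs(r1 - r0)
    return gaps  # both gaps zero means no disparate mistreatment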

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly).

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
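
A toy check of that constraint, with total variation distance over output distributions and a precomputed pairwise metric d as illustrative stand-ins for the paper's choices:

import numpy as np
from itertools import combinations

def lipschitz_violations(probs, d):
    """probs: (n, k) array of per-individual output distributions;
    d: (n, n) task-specific similarity metric. Returns pairs whose
    outputs differ by more than their similarity allows."""
    bad = []
    for i, j in combinations(range(len(probs)), 2):
        tv = 0.5 * np.abs(probs[i] - probs[j]).sum()  # total variation
        if tv > d[i, j]:
            bad.append((i, j))
    return bad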

Inherent Trade-Offs in the Fair Determination of Risk Scores

Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.

Certifying and Removing Disparate Impact

This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
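
The test can be sketched as follows, with logistic regression and a fixed train/test split as stand-ins for the paper's exact choices; a balanced error rate near 0.5 means the protected class is essentially unpredictable from the other attributes:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def disparate_impact_ber(X, z):
    """Balanced error rate of predicting the protected attribute z
    from the remaining features X."""
    X_tr, X_te, z_tr, z_te = train_test_split(X, z, test_size=0.3,
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, z_tr)
    return 1.0 - balanced_accuracy_score(z_te, clf.predict(X_te))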

Discrimination-aware data mining

This approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge, and an empirical assessment of the results on the German credit dataset.