• Corpus ID: 13633339

The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making

@inproceedings{GrgicHlaca2016TheCF,
  title={The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making},
  author={Nina Grgic-Hlaca and Muhammad Bilal Zafar and Krishna P. Gummadi and Adrian Weller},
  year={2016}
}
Machine learning methods are increasingly being used to inform, or sometimes even directly to make, important decisions about humans. A number of recent works have focussed on the fairness of the outcomes of such decisions, particularly on avoiding decisions that affect users of different sensitive groups (e.g., race, gender) disparately. In this paper, we propose to consider the fairness of the process of decision making. Process fairness can be measured by estimating the degree to which… 
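The truncated sentence points at the paper's measurement idea. Below is a minimal sketch of one plausible reading of it, assuming process fairness is estimated as the fraction of surveyed people who consider every feature used by the classifier fair to use; the survey data and feature names are hypothetical.

    # Sketch of a process-fairness estimate: the fraction of respondents
    # who judge ALL features used by the classifier fair to use.
    # The survey data and feature names below are hypothetical.
    def process_fairness(used_features, survey):
        used = set(used_features)
        approving = sum(1 for fair_set in survey if used <= fair_set)
        return approving / len(survey)

    # Each set holds the features one respondent considers fair to use.
    survey = [
        {"prior_convictions", "age"},
        {"prior_convictions"},
        {"prior_convictions", "age", "gender"},
    ]
    print(process_fairness({"prior_convictions"}, survey))         # 1.0
    print(process_fairness({"prior_convictions", "age"}, survey))  # ~0.67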

Citations

The Role of Accuracy in Algorithmic Process Fairness Across Multiple Domains
TLDR
It is shown that, in every domain, disagreements in fairness judgements can be largely explained by the assignments of properties to features, and that fairness judgements can be well predicted across domains by training the predictor on the property assignments from one domain's data and predicting in another.
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
TLDR
This work descriptively surveys users on how they perceive and reason about fairness in algorithmic decision making, and proposes a framework for understanding why people perceive certain features as fair or unfair for use in algorithms.
LimeOut: An Ensemble Approach To Improve Process Fairness
TLDR
This paper considers the problem of making classifiers fairer by reducing their dependence on sensitive features while increasing (or at least maintaining) their accuracy, and proposes a framework that relies on "feature drop-out" techniques in neural-based approaches to tackle process fairness.
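The "feature drop-out" idea lends itself to a short illustration. The sketch below is not LimeOut's actual code; the ensemble design, the synthetic data, and the choice of logistic regression are assumptions made for the example: train one model per dropped sensitive feature and average their predicted probabilities.

    # Hypothetical sketch of feature drop-out: train one classifier per
    # dropped sensitive column and average the ensemble's probabilities,
    # so no single sensitive feature dominates the decision.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def dropout_ensemble(X, y, sensitive_cols):
        models = []
        for col in sensitive_cols:
            keep = [c for c in range(X.shape[1]) if c != col]
            models.append((keep, LogisticRegression().fit(X[:, keep], y)))
        return models

    def ensemble_proba(models, X):
        # Mean positive-class probability across ensemble members.
        return np.mean([m.predict_proba(X[:, keep])[:, 1]
                        for keep, m in models], axis=0)

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
    models = dropout_ensemble(X, y, sensitive_cols=[0, 1])
    print(ensemble_proba(models, X[:5]))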
Fairness and Machine Fairness
TLDR
This work takes "fairness" in this context to be a placeholder for a variety of normative egalitarian considerations, and explores a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems.
Information Theoretic Measures for Fairness-aware Feature Selection
TLDR
This work develops a framework for fairness-aware feature selection which takes into account the correlation among the features and the decision outcome, and is based on information theoretic measures for the accuracy and discriminatory impacts of features.
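As a rough illustration of the general idea (not the paper's exact measures, which also account for correlations among the features), discrete features can be scored by their mutual information with the decision outcome (relevance to accuracy) and with the sensitive attribute (discriminatory potential); all data below is made up.

    # Illustrative fairness-aware feature scoring: mutual information of
    # each discrete feature with the label (relevance) and with the
    # sensitive attribute (leakage). The data below is hypothetical.
    from sklearn.metrics import mutual_info_score

    def feature_scores(features, y, sensitive):
        return {name: (mutual_info_score(col, y),
                       mutual_info_score(col, sensitive))
                for name, col in features.items()}

    features = {
        "zip_code":     [0, 0, 1, 1, 2, 2, 0, 1],
        "num_defaults": [0, 1, 0, 1, 1, 0, 0, 1],
    }
    y         = [0, 1, 0, 1, 1, 0, 0, 1]
    sensitive = [0, 0, 1, 1, 1, 1, 0, 0]
    for name, (relevance, leakage) in feature_scores(features, y, sensitive).items():
        print(f"{name}: relevance={relevance:.3f}, leakage={leakage:.3f}")
    # Prefer features with high relevance and low leakage.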
Making ML models fairer through explanations: the case of LimeOut
TLDR
This paper presents different experiments on multiple datasets and several state of the art classifiers, which show that LimeOut's classifiers improve (or at least maintain) not only process fairness but also other fairness metrics such as individual and group fairness, equal opportunity, and demographic parity.
Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality
TLDR
This work uses the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT and shows that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE.
Survey on Fairness Notions and Related Tensions
TLDR
The commonly used fairness notions are surveyed and the tensions that exist among them and with privacy and accuracy are discussed and the relationship between fairness measures and accuracy on real-world scenarios is illustrated.
FairSight: Visual Analytics for Fairness in Decision Making
  • Yongsu Ahn, Y. Lin · IEEE Transactions on Visualization and Computer Graphics, 2020
TLDR
FairSight, a visual analytic system, is proposed to achieve different notions of fairness in ranking decisions through identifying the required actions – understanding, measuring, diagnosing and mitigating biases – that together lead to fairer decision making.
Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making
TLDR
The approach first develops a social welfare function that incorporates both efficiency and equity, then translates the social welfare maximization problem from economics into the empirical risk minimization task familiar in machine learning, deriving a fairness-aware scoring system with the help of mixed integer programming.
...

References

Showing 1-10 of 22 references
Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
TLDR
A new notion of unfairness, disparate mistreatment, is introduced and defined in terms of misclassification rates; it is proposed for decision boundary-based classifiers and can be easily incorporated into their formulation as convex-concave constraints.
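In that paper's terms, disparate mistreatment is absent when misclassification rates match across sensitive groups. A minimal check of that condition might look like the following sketch (0/1 encodings, array shapes, and the toy data are assumptions):

    # Minimal disparate-mistreatment check: compare false positive and
    # false negative rates across two groups; gaps near zero indicate
    # no disparate mistreatment. Inputs are 0/1 numpy vectors.
    import numpy as np

    def error_rates(y_true, y_pred):
        fpr = np.mean(y_pred[y_true == 0])       # P(yhat=1 | y=0)
        fnr = np.mean(1 - y_pred[y_true == 1])   # P(yhat=0 | y=1)
        return fpr, fnr

    def mistreatment_gaps(y_true, y_pred, group):
        fpr0, fnr0 = error_rates(y_true[group == 0], y_pred[group == 0])
        fpr1, fnr1 = error_rates(y_true[group == 1], y_pred[group == 1])
        return abs(fpr0 - fpr1), abs(fnr0 - fnr1)

    y_true = np.array([0, 1, 1, 0, 0, 1, 1, 0])
    y_pred = np.array([0, 1, 0, 1, 0, 1, 1, 1])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(mistreatment_gaps(y_true, y_pred, group))  # (0.0, 0.5)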
Certifying and Removing Disparate Impact
TLDR
This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
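The underlying quantity is easy to state: the "80% rule" ratio that the paper formalizes compares positive-outcome rates across groups. A sketch follows; the group encoding, threshold, and toy data are assumptions.

    # Disparate impact ratio behind the "80% rule": positive-outcome
    # rate of the protected group divided by that of the other group.
    # A ratio below 0.8 is commonly read as evidence of disparate impact.
    import numpy as np

    def disparate_impact(y_pred, protected):
        return np.mean(y_pred[protected == 1]) / np.mean(y_pred[protected == 0])

    y_pred    = np.array([1, 0, 0, 0, 1, 1, 1, 0])
    protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])
    print(disparate_impact(y_pred, protected))  # 0.25 / 0.75 ≈ 0.33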
Inherent Trade-Offs in the Fair Determination of Risk Scores
TLDR
Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
TLDR
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and it is shown how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
Learning Fair Classifiers
TLDR
This paper introduces a flexible mechanism to design fair classifiers in a principled manner and instantiates this mechanism on three well-known classifiers: logistic regression, hinge loss, and linear and nonlinear support vector machines.
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly)…
Classification with No Discrimination by Preferential Sampling
TLDR
A new solution to the classification with no discrimination (CND) problem is proposed: a sampling scheme that makes the data discrimination-free instead of relabeling the dataset. The new method is not only less intrusive than the "massaging" approach but also outperforms the "reweighing" approach.
False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks"
ProPublica recently released a much-heralded investigative report claiming that a risk assessment tool (known as the COMPAS) used in criminal justice is biased against black defendants…
k-NN as an implementation of situation testing for discrimination discovery and prevention
TLDR
This paper tackles the problems of discrimination discovery and prevention from a dataset of historical decisions by adopting a variant of k-NN classification, which overcomes legal weaknesses and technical limitations of existing proposals.
Being Good: A Short Introduction to Ethics
...