Interventional Fairness: Causal Database Repair for Algorithmic Fairness
@article{Salimi2019InterventionalFC,
  title   = {Interventional Fairness: Causal Database Repair for Algorithmic Fairness},
  author  = {Babak Salimi and Luke Rodriguez and Bill Howe and Dan Suciu},
  journal = {Proceedings of the 2019 International Conference on Management of Data},
  year    = {2019}
}
Fairness is increasingly recognized as a critical component of machine learning systems. However, the underlying data on which these systems are trained often reflect discrimination, suggesting a database repair problem. Existing treatments of fairness rely on statistical correlations, which can be fooled by anomalies such as Simpson's paradox. Proposals for causality-based definitions of fairness can correctly model some of these situations, but they require specification…
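The abstract's point that purely correlational fairness measures can be fooled by Simpson's paradox is easy to demonstrate on toy data. The numbers below are invented for illustration, in the spirit of the classic Berkeley admissions example:

```python
# Hypothetical admissions data: (department, gender, applicants, admitted).
# Women are admitted at a higher rate within EACH department, yet at a lower
# rate overall, because more women apply to the more selective department B.
rows = [
    ("A", "M", 100, 80), ("A", "F", 20, 17),
    ("B", "M", 20, 2),   ("B", "F", 100, 15),
]

def rate(pred):
    """Admission rate over the rows selected by pred(department, gender)."""
    apps = sum(a for d, g, a, s in rows if pred(d, g))
    admits = sum(s for d, g, a, s in rows if pred(d, g))
    return admits / apps

for dept in ("A", "B"):
    f = rate(lambda d, g, dept=dept: d == dept and g == "F")
    m = rate(lambda d, g, dept=dept: d == dept and g == "M")
    print(dept, f > m)                        # True for both departments

overall_f = rate(lambda d, g: g == "F")       # 32/120, roughly 0.27
overall_m = rate(lambda d, g: g == "M")       # 82/120, roughly 0.68
print(overall_f < overall_m)                  # True: the aggregate ordering flips
```

A correlation computed on the aggregate table would conclude the opposite of the per-department comparison, which is exactly why the paper argues for causal rather than correlational criteria.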
94 Citations
Capuchin: Causal Database Repair for Algorithmic Fairness
- Computer Science · ArXiv
- 2019
This paper formalizes the situation as a database repair problem, proving sufficient conditions for fair classifiers in terms of admissible variables rather than a complete causal model, and using these conditions as the basis for database repair algorithms that provide provable fairness guarantees for classifiers trained on the repaired data.
Database Repair Meets Algorithmic Fairness
- Computer Science · SIGMOD Rec.
- 2020
This paper formalizes the situation as a database repair problem, proving sufficient conditions for fair classifiers in terms of admissible variables rather than a complete causal model, and using these conditions as the basis for database repair algorithms that provide provable fairness guarantees for classifiers trained on the repaired data.
Survey on Causal-based Machine Learning Fairness Notions
- Computer Science · ArXiv
- 2020
This paper examines an exhaustive list of causal-based fairness notions, in particular their applicability in real-world scenarios, and compiles the most relevant identifiability criteria for the problem of fairness from the extensive literature on identifiability theory.
Causal Feature Selection for Algorithmic Fairness
- Computer Science · SIGMOD Conference
- 2022
This work proposes an approach to identify a sub-collection of features that ensures fairness of the dataset by performing conditional independence tests between different subsets of features; it theoretically proves the correctness of the proposed algorithm and shows that sublinearly many conditional independence tests suffice to identify these variables.
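As a rough sketch of the primitive such an approach builds on, a conditional independence test on discrete data can be computed per stratum of the conditioning set. The function and data below are hypothetical stand-ins for illustration, not the paper's algorithm:

```python
from collections import Counter, defaultdict

def ci_statistic(samples):
    """Chi-squared-style statistic for 'X independent of Y given Z', summed
    over the strata of Z. samples: iterable of discrete (x, y, z) tuples."""
    strata = defaultdict(list)
    for x, y, z in samples:
        strata[z].append((x, y))
    stat = 0.0
    for pairs in strata.values():
        n = len(pairs)
        cx, cy, cxy = Counter(), Counter(), Counter(pairs)
        for x, y in pairs:
            cx[x] += 1
            cy[y] += 1
        # Compare observed cell counts against the independence expectation.
        for x in cx:
            for y in cy:
                exp = cx[x] * cy[y] / n
                stat += (cxy.get((x, y), 0) - exp) ** 2 / exp
    return stat

# X and Y are both copies of Z: dependent marginally, independent given Z.
independent_given_z = [(z, z, z) for z in (0, 1) for _ in range(50)]
print(ci_statistic(independent_given_z))   # 0.0

# X and Y perfectly linked within a single stratum: the statistic is large.
dependent_given_z = [(v, v, 0) for v in (0, 1) for _ in range(50)]
print(ci_statistic(dependent_given_z))     # 100.0
```

A real system would compare such a statistic against a chi-squared threshold to get a p-value; the point here is only the stratify-then-test structure.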
Identifiability of Causal-based Fairness Notions: A State of the Art
- Computer Science · ArXiv
- 2022
This paper is a compilation of the major identifiability results that are relevant for machine learning fairness; it is of particular interest to fairness researchers, practitioners, and policy makers who are considering the use of causality-based fairness notions.
Data Management for Causal Algorithmic Fairness
- Computer Science · IEEE Data Eng. Bull.
- 2019
It is argued that the concept of fairness requires causal reasoning, and existing works and future opportunities for applying data management techniques to causal algorithmic fairness are identified.
Automated Feature Engineering for Algorithmic Fairness
- Computer Science · Proc. VLDB Endow.
- 2021
A novel multi-objective feature selection strategy leverages feature construction to generate features that support both high accuracy and fairness; on three well-known datasets it achieves higher accuracy than other fairness-aware approaches while maintaining similar or higher fairness.
On the Fairness of Causal Algorithmic Recourse
- Computer Science · ArXiv
- 2020
Two new fairness criteria at the group and individual level are proposed, based on a causal framework that explicitly models relationships between input features, thereby capturing downstream effects of recourse actions performed in the physical world.
Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes
- Computer Science · Entropy
- 2021
This work considers a feedback-based framework where the protected attribute is unavailable and the flagged samples are indirect knowledge, and proposes an approach that performs conditional independence tests on observed data to identify proxy attributes that are causally dependent on the (unknown) protected attribute.
Interpretable Data-Based Explanations for Fairness Debugging
- Computer Science · ArXiv
- 2021
Gopher is introduced, a system that produces compact, interpretable, causal explanations for bias or unexpected model behavior by identifying coherent subsets of the training data that are root causes of this behavior. The paper also introduces the concept of causal responsibility, which quantifies the extent to which intervening on the training data, by removing or updating a subset of it, can resolve the bias.
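A toy sketch of the underlying idea, scoring a training-data subset by how much its removal reduces a bias measure, might look like this. The bias metric and data below are hypothetical stand-ins, not Gopher's actual pipeline:

```python
from collections import defaultdict

def positive_rates(rows):
    """rows: (group, label) pairs; returns per-group positive-label rate."""
    tot, pos = defaultdict(int), defaultdict(int)
    for g, y in rows:
        tot[g] += 1
        pos[g] += y
    return {g: pos[g] / tot[g] for g in tot}

def bias(rows):
    """Demographic-parity-style gap between groups 'A' and 'B'."""
    r = positive_rates(rows)
    return abs(r["A"] - r["B"])

def responsibility(rows, subset):
    """How much does removing the rows at `subset` indices reduce the bias?"""
    keep = [r for i, r in enumerate(rows) if i not in set(subset)]
    return bias(rows) - bias(keep)

data = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
print(bias(data))                        # ~0.6 parity gap before repair
print(responsibility(data, range(4)))    # > 0: dropping four A-positives helps
```

Gopher's contribution is finding *coherent, compact* subsets with high responsibility efficiently; the brute-force scoring above only illustrates what is being scored.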
References
SHOWING 1-10 OF 64 REFERENCES
Capuchin: Causal Database Repair for Algorithmic Fairness
- Computer Science · ArXiv
- 2019
This paper formalizes the situation as a database repair problem, proving sufficient conditions for fair classifiers in terms of admissible variables rather than a complete causal model, and using these conditions as the basis for database repair algorithms that provide provable fairness guarantees for classifiers trained on the repaired data.
Fairness in Relational Domains
- Computer Science · AIES
- 2018
This work uses first-order logic to provide a flexible and expressive language for specifying complex relational patterns of discrimination and extends an existing statistical relational learning framework, probabilistic soft logic (PSL), to incorporate the definition of relational fairness.
Fairness Constraints: Mechanisms for Fair Classification
- Computer Science · AISTATS
- 2017
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in terms of accuracy.
Avoiding Discrimination through Causal Reasoning
- Computer Science · NIPS
- 2017
This work crisply articulates why and when observational criteria fail, formalizing what was previously a matter of opinion; it puts forward natural causal non-discrimination criteria and develops algorithms that satisfy them.
When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
- Computer Science · NIPS
- 2017
This paper shows how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification.
Algorithmic Decision Making and the Cost of Fairness
- Computer Science · KDD
- 2017
This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. The analysis applies not only to algorithms but also to human decision makers carrying out structured decision rules.
Fairness-Aware Classifier with Prejudice Remover Regularizer
- Computer Science · ECML/PKDD
- 2012
A regularization approach is proposed that is applicable to any prediction algorithm with a probabilistic discriminative model; it is applied to logistic regression, and its effectiveness and efficiency are shown empirically.
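A minimal sketch in that spirit is logistic regression with an added penalty on the covariance between the model's scores and a protected attribute. The squared-covariance penalty below is an illustrative simplification, not the paper's mutual-information-based prejudice index, and the data are synthetic:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, s, lam, lr=0.5, steps=2000):
    """Gradient descent on: logistic loss + lam * cov(score, s)^2."""
    w = np.zeros(X.shape[1])
    sc = s - s.mean()                                 # centered protected attr
    n = len(y)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_ll = X.T @ (p - y) / n                   # logistic-loss gradient
        cov = sc @ p / n                              # cov(score, s)
        grad_fair = 2 * cov * (X.T @ (sc * p * (1 - p))) / n
        w -= lr * (grad_ll + lam * grad_fair)
    return w

rng = np.random.default_rng(0)
s = np.repeat([0.0, 1.0], 100)                        # protected attribute
X = np.column_stack([np.ones(200), s + 0.1 * rng.standard_normal(200)])
y = s.copy()                                          # labels correlated with s

def score_cov(w):
    p = sigmoid(X @ w)
    return abs((s - s.mean()) @ p / len(p))

plain = score_cov(fit(X, y, s, lam=0.0))
fair = score_cov(fit(X, y, s, lam=20.0))
print(plain, fair)    # the regularized model's scores track s far less
```

Tuning `lam` trades prediction accuracy against how strongly the scores are decoupled from the protected attribute, mirroring the accuracy/fairness trade-off the paper studies empirically.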
Counterfactual Fairness
- Computer Science · NIPS
- 2017
This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.
FairTest: Discovering Unwarranted Associations in Data-Driven Applications
- Computer Science · 2017 IEEE European Symposium on Security and Privacy (EuroS&P)
- 2017
The unwarranted associations (UA) framework is introduced, a principled methodology for discovering unfair, discriminatory, or offensive user treatment in data-driven applications; the framework is instantiated in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment.
Fairness through awareness
- Computer Science · ITCS '12
- 2012
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
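The Lipschitz-style constraint, that similar individuals receive similar outcomes, can be checked directly on a finite set of individuals. The metric, scoring functions, and points below are hypothetical:

```python
from itertools import combinations

def treats_similarly(points, f, d):
    """True iff |f(x) - f(y)| <= d(x, y) for every pair of individuals."""
    return all(abs(f(x) - f(y)) <= d(x, y) for x, y in combinations(points, 2))

pts = [0.0, 0.2, 0.9]
d = lambda x, y: abs(x - y)                      # task-specific similarity metric

smooth = lambda x: 0.5 * x                       # 0.5-Lipschitz score: passes
threshold = lambda x: 1.0 if x > 0.5 else 0.0    # hard cutoff: fails

print(treats_similarly(pts, smooth, d))          # True
print(treats_similarly(pts, threshold, d))       # False: 0.2 and 0.9 jump by 1.0
```

The paper's actual contribution is *optimizing* utility subject to this constraint (a linear program over outcome distributions), whereas the check above only verifies the constraint for given scores.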