Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness
@article{Ashurst2022WhyFL,
  title   = {Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness},
  author  = {Carolyn Ashurst and Ryan Carey and Silvia Chiappa and Tom Everitt},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2202.10816}
}
In addition to reproducing discriminatory relationships in the training data, machine learning systems can also introduce or amplify discriminatory effects. We refer to this as introduced unfairness, and investigate the conditions under which it may arise. To this end, we propose introduced total variation as a measure of introduced unfairness, and establish graphical conditions under which it may be incentivised to occur. These criteria imply that adding the sensitive attribute as a feature…
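The abstract's key quantity, introduced total variation, admits a quick numerical illustration. The sketch below is a hedged interpretation, assuming introduced TV is the total variation between group-conditional prediction distributions minus the same quantity for the labels; the toy data, variable names, and the `total_variation` helper are hypothetical and not taken from the paper.

```python
import numpy as np

def total_variation(outcome, group):
    """TV distance between the outcome distributions of two groups:
    0.5 * sum_y |P(outcome=y | group=0) - P(outcome=y | group=1)|."""
    values = np.unique(outcome)
    p0 = np.array([np.mean(outcome[group == 0] == v) for v in values])
    p1 = np.array([np.mean(outcome[group == 1] == v) for v in values])
    return 0.5 * np.abs(p0 - p1).sum()

# Toy data: binary sensitive attribute A and "fair" labels Y independent of A,
# so the labels' TV is near zero. The (hypothetical) model copies Y for A == 0
# but skews positive for A == 1, introducing unfairness absent from the labels.
rng = np.random.default_rng(0)
n = 100_000
A = rng.integers(0, 2, size=n)
Y = rng.integers(0, 2, size=n)
skewed = rng.integers(0, 2, size=n) | rng.integers(0, 2, size=n)  # P(1) = 0.75
Y_hat = np.where(A == 1, skewed, Y)

tv_labels = total_variation(Y, A)
tv_preds = total_variation(Y_hat, A)
print(f"TV of labels:      {tv_labels:.3f}")   # ~0.00
print(f"TV of predictions: {tv_preds:.3f}")    # ~0.25
print(f"Introduced TV:     {tv_preds - tv_labels:.3f}")
```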
2 Citations
What-Is and How-To for Fairness in Machine Learning: A Survey, Reflection, and Perspective
- Computer Science · ArXiv
- 2022
Demonstrates the importance of matching the mission and the means of different types of fairness inquiries: those on the data-generating process, on the predicted outcome, and on the induced impact, respectively.
A Complete Criterion for Value of Information in Soluble Influence Diagrams
- Mathematics · ArXiv
- 2022
Influence diagrams have recently been used to analyse the safety and fairness properties of AI systems. A key building block for this analysis is a graphical criterion for value of information (VoI).…
References
Showing 1–10 of 47 references
Path-Specific Counterfactual Fairness
- Computer Science · AAAI
- 2019
This work introduces a causal approach that disregards effects along unfair pathways, simplifying and generalizing previous literature: it corrects observations adversely affected by the sensitive attribute and uses the corrected values to form a decision.
Learning Optimal Fair Policies
- Computer Science · ICML
- 2019
This paper uses methods from causal inference and constrained optimization to learn optimal policies in a way that addresses multiple potential biases which afflict data analysis in sensitive contexts, extending the approach of Nabi & Shpitser (2018).
Counterfactual Fairness
- Computer Science · NIPS
- 2017
This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.
Avoiding Discrimination through Causal Reasoning
- Computer Science · NIPS
- 2017
This work crisply articulates why and when observational criteria fail, thus formalizing what was previously a matter of opinion, puts forward natural causal non-discrimination criteria, and develops algorithms that satisfy them.
A Causal Bayesian Networks Viewpoint on Fairness
- Computer Science · Privacy and Identity Management
- 2018
It is shown that causal Bayesian networks provide a powerful tool for measuring unfairness in a dataset and for designing fair models in complex unfairness scenarios.
Fairness in Machine Learning
- Computer Science · INNSBDDL
- 2019
It is shown how causal Bayesian networks can play an important role in reasoning about and dealing with fairness, especially in complex unfairness scenarios, and how optimal transport theory can be leveraged to develop methods that impose constraints on the full shapes of distributions corresponding to different sensitive attributes.
Fair Data Adaptation with Quantile Preservation
- Computer Science · ArXiv
- 2019
It is shown that certain population notions of fairness are still guaranteed even if the counterfactual model is misspecified, and a practical data adaptation method based on quantile preservation in causal structural equation models is presented.
When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
- Computer Science · NIPS
- 2017
This paper shows how it is possible to make predictions that are approximately fair with respect to multiple possible causal models at once, thus mitigating the problem of exact causal specification.
Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
It is shown that trained models significantly amplify the association of target labels with gender beyond what one would expect from biased datasets, and an adversarial approach is adopted to remove unwanted features corresponding to protected variables from intermediate representations in a deep neural network.
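This amplification finding is close in spirit to the introduced total variation measure above. As a hedged sketch, one might quantify amplification per label as the gap between the gender skew of a model's positive predictions and the skew already present in the training labels; the `gender_skew` helper and array names below are hypothetical, and this is a simplification rather than the paper's exact metric.

```python
import numpy as np

def gender_skew(labels, gender):
    """P(gender = 1 | label = 1): how strongly a positive label
    co-occurs with one group."""
    return np.mean(gender[labels == 1])

def bias_amplification(y_true, y_pred, gender):
    """Positive values mean the predictions associate the label with
    gender == 1 more strongly than the training data already did."""
    return gender_skew(y_pred, gender) - gender_skew(y_true, gender)
```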
Fair Inference on Outcomes
- Mathematics · AAAI
- 2018
It is argued that the existence of discrimination can be formalized in a sensible way as the presence of an effect of a sensitive covariate on the outcome along certain causal pathways, a view that generalizes Pearl (2009).