Corpus ID: 214727927

Fairness Evaluation in Presence of Biased Noisy Labels

@article{Fogliato2020FairnessEI,
  title={Fairness Evaluation in Presence of Biased Noisy Labels},
  author={Riccardo Fogliato and Max G'Sell and Alexandra Chouldechova},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.13808}
}
Risk assessment tools are widely used around the country to inform decision making within the criminal justice system. Recently, considerable attention has been devoted to the question of whether such tools may suffer from racial bias. In this type of assessment, a fundamental issue is that the training and evaluation of the model is based on a variable (arrest) that may represent a noisy version of an unobserved outcome of more central interest (offense). We propose a sensitivity analysis… 
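The measurement problem the abstract describes can be made concrete with a small simulation: when the observed label (arrest) is a group-dependently noisy version of the outcome of interest (offense), error-rate metrics computed against the proxy can diverge across groups even when the same metrics computed against the true outcome do not. The sketch below is not the paper's sensitivity analysis; the noise rates, group structure, toy risk score, and all variable names are hypothetical choices made only to illustrate the distortion (NumPy assumed).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical setup: two groups, an unobserved outcome of interest ("offense"),
# and an observed proxy label ("arrest").
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
offense = rng.binomial(1, 0.3, n)          # true outcome, unobserved in practice

# Assumed group-dependent label noise: offenders in group B are arrested more
# often than offenders in group A, so arrest is a biased proxy for offense.
p_arrest_given_offense = np.where(group == 0, 0.5, 0.8)
arrest = ((offense == 1) & (rng.random(n) < p_arrest_given_offense)).astype(int)

# A toy risk score thresholded into a binary prediction.
score = 0.5 * offense + 0.7 * rng.random(n)
pred = (score > 0.6).astype(int)

def fpr(label, pred, mask):
    """False positive rate among cases the given label marks as negative."""
    negatives = (label == 0) & mask
    return pred[negatives].mean()

for g, name in [(0, "A"), (1, "B")]:
    m = group == g
    print(f"group {name}: FPR w.r.t. arrest = {fpr(arrest, pred, m):.3f}, "
          f"FPR w.r.t. offense = {fpr(offense, pred, m):.3f}")
```

In runs like this the toy classifier has essentially identical false positive rates with respect to offense in both groups, yet its false positive rates measured against arrest differ, which is exactly the kind of distortion a sensitivity analysis over assumed noise rates is meant to probe.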

Citations

The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies
TLDR
A vignette study is presented in which laypersons are tasked with predicting future re-arrests, highlighting the influence of several crucial but often overlooked design decisions and concerns around generalizability that arise when constructing crowdsourcing studies to analyze the impacts of RAIs.
Measuring Fairness under Unawareness of Sensitive Attributes: A Quantification-Based Approach
TLDR
This work tackles the problem of measuring group fairness under unawareness of sensitive attributes, by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates (rather than individual-level class labels), and shows that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
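As a rough illustration of the quantification idea this entry refers to (and that reappears in "Measuring Fairness under Unawareness via Quantification" below), the following sketch implements adjusted classify-and-count, a standard quantification correction that turns a classifier's raw positive rate into a group-level prevalence estimate. The prevalence, error rates, and names here are made up for illustration; the fairness-specific machinery of the cited work is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def acc_prevalence(pred, tpr, fpr):
    """Adjusted classify-and-count: correct a raw positive rate into a
    prevalence estimate using the attribute classifier's known TPR and FPR."""
    raw = pred.mean()
    return float(np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0))

# Hypothetical data: a sensitive attribute that is unobserved at evaluation time,
# plus an imperfect attribute classifier with known error rates.
n = 100_000
true_attr = rng.binomial(1, 0.25, n)       # true prevalence = 0.25
tpr, fpr = 0.85, 0.10
pred_attr = np.where(true_attr == 1,
                     rng.random(n) < tpr,
                     rng.random(n) < fpr).astype(int)

print("raw positive rate:", round(pred_attr.mean(), 3))                      # ~0.29, biased
print("ACC estimate:     ", round(acc_prevalence(pred_attr, tpr, fpr), 3))   # ~0.25
```

Group-level prevalence estimates of this kind, computed within slices such as the positively classified, are roughly what a quantification-based approach plugs into group fairness metrics when individual sensitive attributes are unavailable.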
On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes
TLDR
Bias in violent arrest data is investigated by analysing racial disparities in the likelihood of arrest for White and Black violent offenders from 16 US states as recorded in the National Incident-Based Reporting System (NIBRS).
Measuring Fairness under Unawareness via Quantification
TLDR
This work tackles the problem of measuring group fairness under unawareness of sensitive attributes, by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates (rather than individual-level class labels).
Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective
TLDR
This work studies the use of a proxy for the covariate and presents a theoretical analysis that characterizes weaker conditions under which accurate fairness evaluation is possible, expanding the understanding of scenarios where measuring model fairness via proxies can be an effective approach.
On the Impossibility of Fairness-Aware Learning from Corrupted Data
TLDR
It is proved that there are situations in which an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data.
Under-reliance or misalignment? How proxy outcomes limit measurement of appropriate reliance in AI-assisted decision-making
As AI-based decision support (ADS) tools are broadly adopted, it is critical to understand how humans can effectively incorporate AI recommendations into their decision-making. However, existing…
Fair Classification with Instance-dependent Label Noise
TLDR
This work provides general frameworks for learning fair classifiers with instance-dependent label noise, rewriting the classification risk and the fairness metric in terms of noisy data and thereby building robust classifiers for a causality-based fairness notion.
Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values
TLDR
This paper theoretically analyzes different sources of discrimination risk when training with an imputed dataset and proposes an integrated approach based on decision trees that does not require a separate process of imputation and learning, optimizing a fairness-regularized objective function.
Fairness-Aware Learning from Corrupted Data
TLDR
It is shown that an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data.

References

Showing 1–10 of 58 references
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
TLDR
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and it is shown how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
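A quick numerical check, with hypothetical numbers rather than figures from the paper, shows one face of the incompatibility: if two groups share the same true and false positive rates but have different recidivism prevalence, their positive predictive values cannot also be equal.

```python
def ppv(prevalence, tpr, fpr):
    """Positive predictive value implied by prevalence and classifier error rates."""
    tp = prevalence * tpr
    fp = (1 - prevalence) * fpr
    return tp / (tp + fp)

tpr, fpr = 0.7, 0.2                      # identical error rates for both groups
print(round(ppv(0.30, tpr, fpr), 2))     # lower-prevalence group:  PPV = 0.60
print(round(ppv(0.50, tpr, fpr), 2))     # higher-prevalence group: PPV ~ 0.78
```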
Counterfactual Fairness
TLDR
This paper develops a framework for modeling fairness using tools from causal inference and demonstrates the framework on a real-world problem of fair prediction of success in law school.
Fairness in Criminal Justice Risk Assessments: The State of the Art
Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this article, we seek to clarify…
Risk, Race, & Recidivism: Predictive Bias and Disparate Impact
One way to unwind mass incarceration without compromising public safety is to use risk assessment instruments in sentencing and corrections. Although these instruments figure prominently in current…
An algorithm for removing sensitive information: Application to race-independent recidivism prediction
TLDR
This paper proposes a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained, and provides a probabilistic notion of algorithmic bias.
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
TLDR
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis
TLDR
It is shown that an analysis ignoring differential measurement error may considerably overestimate the causal effects, which contrasts with the case of classical measurement error, which always yields attenuation bias.
Avoiding Discrimination through Causal Reasoning
TLDR
This work crisply articulates why and when observational criteria fail, thus formalizing what was before a matter of opinion, and puts forward natural causal non-discrimination criteria and develops algorithms that satisfy them.
Fair Inference on Outcomes
TLDR
It is argued that the existence of discrimination can be formalized in a sensible way as the presence of an effect of a sensitive covariate on the outcome along certain causal pathways, a view which generalizes (Pearl 2009).
Residual Unfairness in Fair Machine Learning from Prejudiced Data
TLDR
It is proved that, under certain conditions, fairness-adjusted classifiers will in fact induce residual unfairness that perpetuates the same injustices, against the same groups, that biased the data to begin with, thus showing that even state-of-the-art fair machine learning can have a "bias in, bias out" property.