Assessing algorithmic fairness with unobserved protected class using data combination

@inproceedings{Kallus2020AssessingAF,
  title={Assessing algorithmic fairness with unobserved protected class using data combination},
  author={Nathan Kallus and Xiaojie Mao and Angela Zhou},
  booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},
  year={2020}
}
The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness and, in particular, the disparate impacts that ostensibly color-blind algorithms can have on different groups. Examples include credit decisioning, hiring, advertising, criminal justice, personalized medicine, and targeted policymaking, where in some cases legislative or regulatory frameworks for fairness exist and define specific protected classes. In this paper we study a fundamental…
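The core technique can be illustrated concretely. Within each proxy cell Z = z (e.g., a geography or surname group), the main dataset identifies P(Y=1 | z) and the auxiliary dataset identifies P(A=1 | z); the joint distribution of outcome and protected class is then bounded, but not point-identified, by the Fréchet-Hoeffding inequalities. Below is a minimal Python sketch of those bounds with made-up cell values (our illustration, not the paper's code):

import numpy as np

# Per-cell inputs: P(Z=z), P(Y=1|Z=z) from the main data,
# and P(A=1|Z=z) from the auxiliary data. Values are illustrative.
cells = {
    "z1": (0.5, 0.30, 0.20),
    "z2": (0.3, 0.55, 0.60),
    "z3": (0.2, 0.70, 0.40),
}

lo = hi = 0.0   # bounds on the joint P(Y=1, A=1)
p_a = 0.0       # marginal P(A=1)
for pz, py_z, pa_z in cells.values():
    lo += pz * max(0.0, py_z + pa_z - 1.0)  # Frechet-Hoeffding lower bound
    hi += pz * min(py_z, pa_z)              # Frechet-Hoeffding upper bound
    p_a += pz * pa_z

# Implied partial-identification interval for E[Y | A=1]
print(f"E[Y|A=1] lies in [{lo / p_a:.3f}, {hi / p_a:.3f}]")

A disparity such as E[Y|A=1] - E[Y|A=0] can likewise only be bounded unless further assumptions are imposed.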
Measuring Fairness under Unawareness via Quantification
TLDR
This work tackles the problem of measuring group fairness under unawareness of sensitive attributes by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates (rather than individual-level class labels).
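For context, the workhorse quantification method is Adjusted Classify & Count: correct the raw rate of positive predictions using the classifier's true and false positive rates estimated on held-out labeled data. A minimal sketch on synthetic data (our illustration, not necessarily this paper's exact estimator):

import numpy as np

rng = np.random.default_rng(0)
tpr, fpr = 0.80, 0.10       # assumed known from a validation set
true_prevalence = 0.35      # unknown in practice; used only to simulate

# Simulate classifier decisions on an unlabeled target sample.
y = rng.random(10_000) < true_prevalence
predicted_pos = np.where(y, rng.random(y.size) < tpr,
                            rng.random(y.size) < fpr)

cc = predicted_pos.mean()                       # naive classify-and-count
acc = np.clip((cc - fpr) / (tpr - fpr), 0, 1)   # adjusted estimate
print(f"naive: {cc:.3f}, adjusted: {acc:.3f}")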
Improving Fairness and Privacy in Selection Problems
TLDR
This work studies the possibility of using a differentially private exponential mechanism as a post-processing step to improve both the fairness and the privacy of supervised learning models, and shows that the exponential mechanism can make the decision-making process perfectly fair.
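A minimal sketch of exponential-mechanism selection on synthetic scores (our simplification; the paper's exact mechanism and utility function may differ): candidates are sampled with probability proportional to exp(eps * score / (2 * sensitivity)) rather than chosen deterministically by top score.

import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=100)   # e.g., model-predicted qualification scores
eps = 1.0                       # privacy parameter (assumed)
sensitivity = 1.0               # assumed utility sensitivity

logits = eps * scores / (2 * sensitivity)
probs = np.exp(logits - logits.max())   # subtract max for numerical stability
probs /= probs.sum()

# Pick k candidates without replacement (a common heuristic).
selected = rng.choice(scores.size, size=10, replace=False, p=probs)
print(sorted(selected.tolist()))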
A Statistical Test for Probabilistic Fairness
TLDR
This paper develops a rigorous hypothesis testing mechanism for assessing the probabilistic fairness of any pre-trained logistic classifier, and shows both theoretically and empirically that the proposed test is asymptotically correct.
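The paper's test targets probabilistic fairness of logistic classifiers specifically; as a simpler stand-in that illustrates hypothesis testing of a fairness criterion, here is a two-proportion z-test for a demographic-parity gap on synthetic decisions (entirely our example, not the paper's method):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pred_a = rng.random(4000) < 0.42   # positive decisions, group A
pred_b = rng.random(5000) < 0.38   # positive decisions, group B

p_a, p_b = pred_a.mean(), pred_b.mean()
n_a, n_b = pred_a.size, pred_b.size
p_pool = (pred_a.sum() + pred_b.sum()) / (n_a + n_b)
z = (p_a - p_b) / np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
p_value = 2 * stats.norm.sf(abs(z))   # two-sided test of equal rates
print(f"gap = {p_a - p_b:.3f}, z = {z:.2f}, p = {p_value:.4f}")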
Assessing Disparate Impacts of Personalized Interventions: Identifiability and Bounds
TLDR
This work proves how to nonetheless point-identify quantities under the additional assumption of monotone treatment response, which may be reasonable in many applications, and provides a sensitivity analysis for this assumption by means of sharp partial-identification bounds under violations of monotonicity of varying strengths.
Fairness, Equality, and Power in Algorithmic Decision-Making
TLDR
This work argues that leading notions of fairness suffer from three key limitations: they legitimize inequalities justified by "merit"; they are narrowly bracketed, considering only differences of treatment within the algorithm; and they consider between-group and not within-group differences.
Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective
TLDR
This work studies using a proxy for the covariate variable and presents a theoretical analysis that characterizes weaker conditions under which accurate fairness evaluation is possible, expanding the understanding of scenarios where measuring model fairness via proxies can be effective.
Algorithmic Fairness
TLDR
An overview of the main concepts of identifying, measuring, and improving algorithmic fairness when using AI algorithms is presented, and the most commonly used fairness-related datasets in this field are described.
Blind Pareto Fairness and Subgroup Robustness
TLDR
The proposed Blind Pareto Fairness (BPF) method leverages no-regret dynamics to recover a fair minimax classifier that reduces the worst-case risk of any potential subgroup of sufficient size and guarantees that the remaining population receives the best possible level of service.
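The worst-case risk over any subgroup containing at least an alpha-fraction of the data has a simple closed form: it is the average of the top alpha-fraction of per-example losses (a CVaR), since the adversary simply picks the highest-loss points. A minimal sketch of that inner maximization on synthetic losses (our illustration; BPF itself alternates this with no-regret model updates):

import numpy as np

rng = np.random.default_rng(3)
losses = rng.exponential(size=1000)   # per-example losses of some model

alpha = 0.2                           # minimum subgroup fraction
k = int(np.ceil(alpha * losses.size))
worst_group_risk = np.sort(losses)[-k:].mean()   # CVaR at level alpha
print(f"average risk: {losses.mean():.3f}, "
      f"worst {alpha:.0%}-subgroup risk: {worst_group_risk:.3f}")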
Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values
TLDR
This paper proposes an integrated decision-tree approach that does not require separate imputation and learning steps: it trains a tree with missing incorporated as attribute (MIA), which avoids explicit imputation, and optimizes a fairness-regularized objective function.
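A common way to emulate MIA with an off-the-shelf tree learner is to duplicate each feature, filling missing values with a very large sentinel in one copy and a very small one in the other, so any split can route missing values to either child. A minimal sketch on synthetic data (our emulation; the paper's fairness regularizer is omitted):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
X[rng.random(X.shape) < 0.2] = np.nan   # inject missingness
y = (np.nan_to_num(X[:, 0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

BIG = 1e12   # finite sentinels (sklearn rejects inf inputs)
X_mia = np.hstack([np.nan_to_num(X, nan=BIG), np.nan_to_num(X, nan=-BIG)])
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_mia, y)
print(f"train accuracy: {clf.score(X_mia, y):.3f}")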
Anti-Discrimination Insurance Pricing: Regulations, Fairness Criteria, and Models
On the issue of insurance discrimination, a grey area in regulation has resulted from the growing use of big data analytics by insurance companies: direct discrimination is prohibited, but indirect…

References

Showing 1-10 of 91 references
Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved
TLDR
This paper decomposes the biases in estimating outcome disparity via threshold-based imputation into multiple interpretable bias sources, allowing us to explain when over- or underestimation occurs, and proposes an alternative weighted estimator that uses soft classification.
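To make the contrast concrete, here is a minimal sketch on synthetic data (our variable names) comparing the threshold-based and soft-weighted disparity estimators when only proxy probabilities for the protected class are available:

import numpy as np

rng = np.random.default_rng(5)
n = 50_000
p_hat = rng.beta(2, 4, size=n)       # proxy model's P(A=1|X), calibrated here
a = rng.random(n) < p_hat            # true (unobserved) protected class
y = rng.random(n) < np.where(a, 0.5, 0.4)   # outcomes with a true 0.1 gap

true_gap = y[a].mean() - y[~a].mean()

hard = p_hat > 0.5                   # threshold-based imputation
thresh_gap = y[hard].mean() - y[~hard].mean()

weighted_gap = (np.average(y, weights=p_hat)       # soft-classification weights
                - np.average(y, weights=1 - p_hat))

print(f"true {true_gap:.3f}, threshold {thresh_gap:.3f}, "
      f"weighted {weighted_gap:.3f}")

Both proxy-based estimates are generally biased when the outcome depends on the protected class beyond the proxy; the paper decomposes exactly when and why each over- or underestimates.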
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
TLDR
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
A comparative study of fairness-enhancing interventions in machine learning
TLDR
It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
Certifying and Removing Disparate Impact
TLDR
This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
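The test is easy to sketch: train any classifier to predict the protected class from the remaining attributes and examine its balanced error rate (BER); a BER near 0.5 certifies that the data admits little disparate impact, while a low BER flags risk. A minimal sketch on synthetic data (our illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 5000
protected = rng.random(n) < 0.4
X = rng.normal(size=(n, 4)) + 0.8 * protected[:, None]  # correlated features

X_tr, X_te, a_tr, a_te = train_test_split(X, protected, random_state=0)
pred = LogisticRegression().fit(X_tr, a_tr).predict(X_te)

# Balanced error rate: average of the two class-conditional error rates.
ber = 0.5 * ((pred[a_te] == 0).mean() + (pred[~a_te] == 1).mean())
print(f"BER = {ber:.3f} (0.5 means unpredictable; low values flag risk)")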
Fairness in Criminal Justice Risk Assessments: The State of the Art
Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this article, we seek to clarify…
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in accuracy.
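The (un)fairness measure here is the empirical covariance between the sensitive attribute and the signed distance to the decision boundary, which the paper imposes as a constraint during training. A minimal sketch of evaluating it for a fixed linear classifier on synthetic data (our illustration):

import numpy as np

rng = np.random.default_rng(7)
n = 2000
a = (rng.random(n) < 0.5).astype(float)          # sensitive attribute
X = rng.normal(size=(n, 3)) + 0.5 * a[:, None]   # features correlated with a
theta = np.array([1.0, -0.5, 0.3])               # some linear classifier

signed_dist = X @ theta                          # signed distance to boundary
boundary_cov = np.mean((a - a.mean()) * signed_dist)
print(f"decision-boundary covariance: {boundary_cov:.3f} "
      "(0 would satisfy the fairness constraint)")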
Machine Learning, Health Disparities, and Causal Reasoning
In their current Annals article, Rajkomar and colleagues (1) warn us that the introduction of machine-learned predictive algorithms into medicine might inadvertently reinforce or create inequitable…
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
TLDR
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and shown how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
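The arithmetic behind the impossibility is easy to reproduce: a perfectly calibrated score, thresholded identically for both groups, yields different false positive rates whenever base rates differ. A minimal numeric sketch with synthetic scores (our illustration):

import numpy as np

rng = np.random.default_rng(8)

def group_fpr(base_rate, n=100_000, threshold=0.5):
    # Calibrated by construction: Y ~ Bernoulli(score).
    scores = rng.beta(8 * base_rate, 8 * (1 - base_rate), size=n)
    y = rng.random(n) < scores
    flagged = scores >= threshold
    return flagged[~y].mean()   # false positive rate among true negatives

print(f"base rate 0.3: FPR = {group_fpr(0.3):.3f}")
print(f"base rate 0.5: FPR = {group_fpr(0.5):.3f}")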
Inherent Trade-Offs in the Fair Determination of Risk Scores
TLDR
This work suggests some of the ways in which key notions of fairness are incompatible with each other, and hence provides a framework for thinking about the trade-offs between them.
Ensuring Fairness in Machine Learning to Advance Health Equity
TLDR
This work describes the mechanisms by which a model's design, data, and deployment may lead to disparities; explains how different approaches to distributive justice in machine learning can advance health equity; and discusses which contexts are more appropriate for different equity approaches in machine learning.