Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved

@inproceedings{Chen2019FairnessUU,
  title={Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved},
  author={Jiahao Chen and Nathan Kallus and Xiaojie Mao and Geoffry Svacha and Madeleine Udell},
  booktitle={Proceedings of the Conference on Fairness, Accountability, and Transparency},
  year={2019}
}
Assessing the fairness of a decision-making system with respect to a protected class, such as gender or race, is challenging when class membership labels are unavailable. Probabilistic models for predicting the protected class based on observable proxies, such as surname and geolocation for race, are sometimes used to impute these missing labels for compliance assessments. Empirically, these methods are observed to exaggerate disparities, but the reason why is unknown. In this paper, we…
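The surname-and-geolocation proxy approach described in the abstract is typically a Bayesian method in the style of BISG (Bayesian Improved Surname Geocoding). A minimal sketch of the idea, assuming toy probability tables (all numbers below are hypothetical, not real Census figures):

```python
# BISG-style Bayesian proxy: combine a surname-based prior P(race | surname)
# with a geography-based likelihood P(tract | race) via Bayes' rule.
# All probability tables here are illustrative, not real Census data.

surname_prior = {          # P(race | surname) for one surname (toy values)
    "white": 0.60, "black": 0.25, "hispanic": 0.10, "asian": 0.05,
}
geo_likelihood = {         # P(tract | race) for one census tract (toy values)
    "white": 0.02, "black": 0.08, "hispanic": 0.03, "asian": 0.01,
}

def bisg_posterior(prior, likelihood):
    """Return P(race | surname, tract), assuming surname and tract are
    conditionally independent given race."""
    unnorm = {r: prior[r] * likelihood[r] for r in prior}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}

post = bisg_posterior(surname_prior, geo_likelihood)
```

In this toy example the geolocation evidence shifts the posterior away from the surname-only prior, which is exactly why downstream disparity estimates depend on the proxy's calibration.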
Assessing algorithmic fairness with unobserved protected class using data combination
TLDR
This paper studies a fundamental practical challenge in assessing disparate impacts, or performance disparities in general: protected class membership is often not observed in the data, particularly in lending and healthcare. It provides optimization-based algorithms for computing and visualizing the sets of simultaneously achievable pairwise disparities.
Unaware Fairness: Hierarchical Random Forest for Protected Classes
  • Xian Li
  • Computer Science, Mathematics
  • ArXiv
  • 2021
TLDR
A hierarchical random forest model for prediction that does not explicitly involve protected classes is proposed, and an example from Boston police interview records is analyzed to illustrate the usefulness of the proposed model.
Measuring Fairness under Unawareness via Quantification
TLDR
This work tackles the problem of measuring group fairness under unawareness of sensitive attributes by using techniques from quantification, a supervised learning task concerned with directly providing group-level prevalence estimates rather than individual-level class labels.
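To make "quantification" concrete: rather than classifying each individual and counting, quantification methods estimate the group-level prevalence directly. A minimal sketch of one classic quantification technique, adjusted classify and count (ACC), used here purely to illustrate the idea (the toy rates are assumptions, not the paper's numbers):

```python
# "Adjusted classify and count" (ACC): the observed positive-prediction rate
# satisfies   raw_rate = tpr * p + fpr * (1 - p)
# so the true prevalence p can be recovered by inverting this relation.
# tpr/fpr would be estimated on held-out labeled data; values here are toy.

def acc_prevalence(raw_rate, tpr, fpr):
    """Estimate the true prevalence p from the observed positive rate."""
    est = (raw_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, est))  # clip to the valid range [0, 1]

# With true prevalence 0.3, tpr=0.8, fpr=0.1, the raw positive rate is
# 0.8 * 0.3 + 0.1 * 0.7 = 0.31, and ACC recovers the prevalence exactly:
estimate = acc_prevalence(0.31, tpr=0.8, fpr=0.1)
```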
Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective
TLDR
This work studies the use of a proxy for the covariate variable and presents a theoretical analysis characterizing weaker conditions under which accurate fairness evaluation is possible, expanding the understanding of scenarios where measuring model fairness via proxies can be an effective approach.
Fair Transfer Learning with Missing Protected Attributes
TLDR
This paper proposes two new weighting methods: prevalence-constrained covariate shift (PCCS), which does not require protected attributes in the target domain, and target-fair covariate shift (TFCS), which does not require protected attributes in the source domain, and empirically demonstrates their efficacy in two applications.
Equalized odds postprocessing under imperfect group information
TLDR
This paper investigates to what extent fairness interventions can be effective even when only imperfect information about the protected attribute is available, and identifies conditions on the perturbation that guarantee the bias of a classifier is reduced even when equalized odds postprocessing is run with the perturbed attribute.
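The quantity at stake in that setting can be sketched concretely: the equalized-odds gap (here taken as the sum of absolute group differences in true and false positive rates, one common convention), which can be computed with either the true protected attribute or a perturbed proxy of it. All data below are toy:

```python
# Equalized-odds gap: sum of absolute differences in group TPR and FPR.
# Evaluating it with a perturbed group attribute, as studied above, can
# give a different picture than with the true attribute. Toy data only.

def group_rates(preds, labels, groups, g):
    """Return (TPR, FPR) for group g; preds and labels are 0/1."""
    tp = sum(p for p, y, a in zip(preds, labels, groups) if a == g and y == 1)
    pos = sum(1 for y, a in zip(labels, groups) if a == g and y == 1)
    fp = sum(p for p, y, a in zip(preds, labels, groups) if a == g and y == 0)
    neg = sum(1 for y, a in zip(labels, groups) if a == g and y == 0)
    return tp / pos, fp / neg

def eo_gap(preds, labels, groups):
    """Sum of |TPR difference| and |FPR difference| between groups 0 and 1."""
    tpr0, fpr0 = group_rates(preds, labels, groups, 0)
    tpr1, fpr1 = group_rates(preds, labels, groups, 1)
    return abs(tpr0 - tpr1) + abs(fpr0 - fpr1)
```

Swapping the true `groups` vector for a noisy proxy changes the measured gap, which is exactly the sensitivity the paper's conditions control.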
It's COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks
TLDR
It is shown that pretrial RAI datasets contain numerous measurement biases and errors inherent to criminal-justice pretrial evidence and, due to disparities in discretion and deployment, are limited in supporting claims about real-world outcomes, making the datasets a poor fit for benchmarking under assumptions of ground truth and real-world impact.
Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information
TLDR
This work experimentally demonstrates that the test accuracy of the attribute classifier is not always correlated with its effectiveness in bias estimation for a downstream model, and develops heuristics for both training and using attribute classifiers for bias estimation in the data-scarce regime.
The fallacy of equating "blindness" with fairness: ensuring trust in machine learning applications to consumer credit
TLDR
This work investigates the idea that "blindness" to certain attributes hinders consumer fairness more than it helps, since it limits the ability to determine whether wrongful discrimination has occurred and to build better-performing models for populations that have been historically underserved.
The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the xAUC Metric
TLDR
This paper introduces the xAUC disparity as a metric to assess the disparate impact of risk scores, defined as the difference between the probability of ranking a random positive example from one protected group above a random negative example from the other group, and vice versa.
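The pairwise ranking probabilities behind the xAUC disparity can be computed directly on small samples. A minimal sketch with made-up scores (the group-pairing convention and tie handling here are assumptions for illustration):

```python
# xAUC(A, B): probability that a randomly drawn positive example from group A
# receives a higher risk score than a randomly drawn negative example from
# group B. The xAUC disparity is xAUC(A, B) - xAUC(B, A). Toy scores only.

def xauc(pos_scores, neg_scores):
    """P(positive score > negative score); ties count as half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# positives and negatives split by protected group (hypothetical scores)
pos_a, neg_a = [0.9, 0.7, 0.6], [0.4, 0.2]
pos_b, neg_b = [0.8, 0.5], [0.6, 0.3, 0.1]

disparity = xauc(pos_a, neg_b) - xauc(pos_b, neg_a)
```

A nonzero disparity means one group's positives are systematically more likely to outrank the other group's negatives than the reverse, even if the overall AUC looks fine.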

References

Showing 1–10 of 39 references
Assessing Fair Lending Risks Using Race/Ethnicity Proxies
  • Yan Zhang
  • Computer Science, Business
  • Manag. Sci.
  • 2018
TLDR
In assessing fair lending risks where classification of race/ethnicity is called for, BISG maximum classification is proposed, which produces a more accurate estimate of mortgage pricing disparities than current practices.
Fairness-Aware Classifier with Prejudice Remover Regularizer
TLDR
A regularization approach is proposed that is applicable to any prediction algorithm with probabilistic discriminative models; it is applied to logistic regression, and its effectiveness and efficiency are empirically demonstrated.
Fairness through awareness
TLDR
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Fairness in Criminal Justice Risk Assessments: The State of the Art
Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this article, we seek to clarify…
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel, intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows fine-grained control over the degree of fairness, often at a small cost in terms of accuracy.
Does mitigating ML's impact disparity require treatment disparity?
TLDR
This paper shows that when sensitive and (nominally) nonsensitive features are correlated, DLPs will indirectly implement treatment disparity, undermining the policy desiderata they are designed to address; in general, DLPs provide suboptimal trade-offs between accuracy and impact parity.
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
TLDR
It is demonstrated that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and it is shown how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
Erratum to: Using the Census Bureau’s surname list to improve estimates of race/ethnicity and associated disparities
Commercial health plans need member racial/ethnic information to address disparities, but often lack it. We incorporate the U.S. Census Bureau's latest surname list into a previous Bayesian method…
Using the Census Bureau’s surname list to improve estimates of race/ethnicity and associated disparities
Commercial health plans need member racial/ethnic information to address disparities, but often lack it. We incorporate the U.S. Census Bureau's latest surname list into a previous Bayesian method…
Reverse Regression: The Algebra of Discrimination
How should discrimination in the marketplace be defined? The answer is less obvious than it might seem. As Conway and Roberts (1983) suggest, there are various definitions. The data may suggest that…