Corpus ID: 227248126

Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination

@article{Weber2020BlackLM,
  title={Black Loans Matter: Distributionally Robust Fairness for Fighting Subgroup Discrimination},
  author={Mark Weber and Mikhail Yurochkin and Sherif M. Botros and V. K. Markov},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.01193}
}
Algorithmic fairness in lending today relies on group fairness metrics for monitoring statistical parity across protected groups. This approach is vulnerable to subgroup discrimination by proxy, carrying significant risks of legal and reputational damage for lenders and blatantly unfair outcomes for borrowers. Practical challenges arise from the many possible combinations and subsets of protected groups. We motivate this problem against the backdrop of historical and residual racism in the …
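To make the vulnerability concrete, here is a minimal, self-contained sketch (not taken from the paper; the column names and synthetic numbers are hypothetical) in which approval rates satisfy statistical parity for race and for gender taken separately, yet the subgroup defined by their intersection is severely under-approved:

```python
# Minimal illustration of subgroup discrimination hidden from marginal parity checks.
# All column names and the synthetic data below are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "race":     ["black"] * 4 + ["white"] * 4,
    "gender":   ["f", "f", "m", "m"] * 2,
    "approved": [0, 0, 1, 1,      # Black applicants: 50% approved
                 1, 1, 0, 0],     # White applicants: 50% approved
})

print("overall approval rate:", data["approved"].mean())

# Marginal checks: each protected group looks fine in isolation.
print(data.groupby("race")["approved"].mean())    # black 0.5, white 0.5
print(data.groupby("gender")["approved"].mean())  # f 0.5,     m 0.5

# Intersectional check: the subgroup defined by race AND gender is not fine.
print(data.groupby(["race", "gender"])["approved"].mean())
# black/f -> 0.0, black/m -> 1.0, white/f -> 1.0, white/m -> 0.0
```

The same pattern can arise through proxies (e.g. ZIP code) rather than the protected attributes themselves, which is why monitoring a handful of coarse groups is not, by itself, a reliable safeguard.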

References

Showing 1-10 of 19 references
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
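The auditing-as-learning viewpoint can be illustrated with a much-simplified sketch (not the authors' algorithm; the data, column layout, and decision rule below are synthetic placeholders): fit a weak learner over the protected attributes to a model's decisions, then flag regions where the learner's predicted acceptance rate deviates sharply from the overall rate.

```python
# Much-simplified illustration of auditing-as-learning for subgroup disparities.
# The data, attribute encoding, and `decisions` array are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5_000
protected = rng.integers(0, 2, size=(n, 3))        # e.g. race, gender, age bucket
decisions = rng.binomial(1, 0.5, size=n)           # hypothetical lender decisions
subgroup = (protected[:, 0] == 1) & (protected[:, 1] == 0)
decisions[subgroup] = rng.binomial(1, 0.2, size=subgroup.sum())  # hidden disparity

auditor = DecisionTreeClassifier(max_depth=2).fit(protected, decisions)
rates = auditor.predict_proba(protected)[:, 1]      # per-region acceptance rate
worst = np.argmin(rates)
print("overall rate:", decisions.mean())
print("lowest audited rate:", rates[worst], "for attributes", protected[worst])
```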
Two Simple Ways to Learn Individual Fairness Metrics from Data
This paper shows empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases, and provides theoretical guarantees on the statistical performance of both approaches.
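One recipe in this line of work, shown here only as a simplified illustration and not necessarily the paper's exact procedure, is to estimate a "sensitive direction" by regressing the protected attribute on the features and then to measure distances only in the subspace orthogonal to that direction:

```python
# Simplified illustration of learning a fair metric from data (not necessarily the
# paper's exact procedure). Data, shapes, and the protected attribute are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 5))                  # applicant features
gender = (X[:, 0] + 0.1 * rng.normal(size=1_000) > 0).astype(int)

w = LogisticRegression().fit(X, gender).coef_.ravel()
w = w / np.linalg.norm(w)                        # unit "sensitive direction"
P = np.eye(X.shape[1]) - np.outer(w, w)          # projector onto its complement

def fair_distance(x1, x2):
    """Distance that ignores movement along the learned sensitive direction."""
    return float(np.linalg.norm(P @ (x1 - x2)))

# Two applicants differing only along the sensitive direction are ~0 apart.
print(fair_distance(X[0], X[0] + 2.0 * w))
```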
Metric Learning for Individual Fairness
This work proposes a solution to the problem of approximating a metric for individual fairness based on human judgments, by assuming that the arbiter can answer a limited set of queries concerning similarity of individuals for a particular task, is free of explicit biases, and possesses sufficient domain knowledge to evaluate similarity.
Fairness through awareness
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
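The central definition can be stated compactly; in the usual notation, M is a randomized classifier, D a distance between output distributions, d the task-specific similarity metric, and U a utility function:

```latex
% Individual fairness as a Lipschitz condition (Fairness through awareness):
% maximize expected utility subject to treating similar individuals similarly.
\max_{M} \; \mathbb{E}_{x}\!\left[ U\big(M(x)\big) \right]
\quad \text{s.t.} \quad
D\big(M(x), M(y)\big) \,\le\, d(x, y) \quad \forall\, x, y
```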
Consumer-Lending Discrimination in the Fintech Era
Discrimination in lending can occur either in face-to-face decisions or in algorithmic scoring. We provide a workable interpretation of the courts’ legitimate-business-necessity defense of …
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification
It is proved that MULTIACCURACY-BOOST converges efficiently, and it is shown that if the initial model is accurate on an identifiable subgroup, then the post-processed model will be as well.
An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision
This work gathers a new dataset of human judgments on a criminal recidivism prediction (COMPAS) task and attempts to learn a similarity metric satisfying individual fairness from human-annotated data.
A Reductions Approach to Fair Classification
The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired constraints.
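This reduction is available in the open-source fairlearn library as ExponentiatedGradient; a minimal usage sketch follows (the toy DataFrame, labels, and race column are hypothetical placeholders for a real lending dataset):

```python
# Hedged usage sketch of the cost-sensitive reduction via fairlearn's
# ExponentiatedGradient. X, y, and the race column are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

X = pd.DataFrame({"income": [30, 80, 55, 40], "debt": [10, 5, 20, 8]})
y = [0, 1, 1, 0]                      # loan outcome labels
race = ["black", "white", "white", "black"]

mitigator = ExponentiatedGradient(
    LogisticRegression(solver="liblinear"),
    constraints=DemographicParity(),  # statistical-parity constraint
)
mitigator.fit(X, y, sensitive_features=race)
print(mitigator.predict(X))
```

Note that the fitted object is a randomized classifier, so repeated calls to predict may return different decisions for the same rows.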
Calibration for the (Computationally-Identifiable) Masses
A new measure of algorithmic fairness is developed and studied that aims to mitigate concerns about discrimination introduced in the process of learning a predictor from data; it is shown that in many settings this strong notion of protection from discrimination is both attainable and aligned with the goal of obtaining accurate predictions.
Towards Deep Learning Models Resistant to Adversarial Attacks
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
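The robust-optimization view it takes can be written as a saddle-point problem (standard notation: θ are model parameters, L the loss, S the set of allowed perturbations, D the data distribution):

```latex
% Robust optimization view of adversarial training: minimize the expected
% worst-case loss over the set S of allowed perturbations \delta.
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
\left[ \max_{\delta \in S} \, L\big(\theta,\, x + \delta,\, y\big) \right]
```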