Fair classification and social welfare

@inproceedings{Hu2020FairCA,
  title={Fair classification and social welfare},
  author={Lily Hu and Yiling Chen},
  booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},
  year={2020}
}
  • Published 1 May 2019
  • Computer Science, Mathematics
Now that machine learning algorithms lie at the center of many important resource allocation pipelines, computer scientists have been unwittingly cast as partial social planners. Given this state of affairs, important questions follow. How do leading notions of fairness as defined by computer scientists map onto longer-standing notions of social welfare? In this paper, we present a welfare-based analysis of fair classification regimes. Our main findings assess the welfare impact of fairness… 
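The welfare framing in this abstract can be illustrated with a toy calculation (the utilities, threshold, and additive welfare model below are hypothetical, invented for illustration, and are not figures from the paper): an unconstrained threshold rule can yield higher total utility than a parity-style rule that accepts equal numbers from each group.

```python
def utilitarian_welfare(decisions, utilities):
    """Sum the utilities of the positively classified individuals."""
    return sum(u for d, u in zip(decisions, utilities) if d)

def top_k_decisions(utilities, k):
    """Accept the k individuals with the highest utility."""
    order = sorted(range(len(utilities)), key=lambda i: -utilities[i])
    chosen = set(order[:k])
    return [i in chosen for i in range(len(utilities))]

# Hypothetical per-individual utilities of acceptance for two groups.
group_a = [0.9, 0.8, 0.7, 0.2]
group_b = [0.6, 0.3, 0.2, 0.1]

# Unconstrained rule: accept anyone whose utility clears a fixed threshold.
threshold = 0.5
unconstrained = utilitarian_welfare(
    [u > threshold for u in group_a + group_b], group_a + group_b)

# Parity-style rule: accept the same number (top two) from each group.
parity = (utilitarian_welfare(top_k_decisions(group_a, 2), group_a)
          + utilitarian_welfare(top_k_decisions(group_b, 2), group_b))

print(round(unconstrained, 2), round(parity, 2))  # 3.0 2.6
```

In this contrived example the parity-style rule lowers total welfare, which is the kind of trade-off a welfare-based analysis makes explicit.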
Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning
TLDR
The theoretical results characterize the optimal strategies in this class of policies, bound the Pareto errors due to inaccuracies in the scores, and show an equivalence between optimal strategies and a rich class of fairness-constrained profit-maximizing policies.
Protecting the Protected Group: Circumventing Harmful Fairness
TLDR
The welfare-Equalizing approach provides a unified framework for discussing fairness in classification in the presence of a self-interested party and finds that the disadvantaged protected group can be worse off after imposing a fairness constraint.
An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning
We address an inherent difficulty in welfare-theoretic fair ML, by proposing an equivalently-axiomatically justified alternative setting, and studying the resulting computational and statistical…
Fairness in Machine Learning
TLDR
It is shown how causal Bayesian networks can play an important role to reason about and deal with fairness, especially in complex unfairness scenarios, and how optimal transport theory can be leveraged to develop methods that impose constraints on the full shapes of distributions corresponding to different sensitive attributes.
Two-sided fairness in rankings via Lorenz dominance
TLDR
This work proposes to generate rankings by maximizing concave welfare functions, and develops an efficient inference procedure based on the Frank-Wolfe algorithm that guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility.
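The Lorenz-dominance criterion mentioned in this TLDR can be illustrated with a short check (a generic sketch of generalized Lorenz dominance, not the paper's ranking procedure):

```python
def lorenz_dominates(x, y):
    """True if x (generalized-)Lorenz-dominates y: every cumulative sum of
    the ascending-sorted utilities in x is at least that of y."""
    xs, ys = sorted(x), sorted(y)
    cx = cy = 0.0
    for a, b in zip(xs, ys):
        cx += a
        cy += b
        if cx < cy:
            return False
    return True

print(lorenz_dominates([4, 4], [6, 2]))  # True: same total, more equal
print(lorenz_dominates([6, 2], [4, 4]))  # False
```

Maximizing a concave welfare function pushes solutions toward Lorenz-dominating utility profiles, i.e. profiles that redistribute utility toward the worse-off at a given total.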
Welfare-based Fairness through Optimization
TLDR
It is argued that optimization models allow formulation of a wide range of fairness criteria as social welfare functions, while enabling AI to take advantage of highly advanced solution technology, and supports a broad perspective on fairness motivated by general distributive justice considerations.
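The idea of expressing fairness criteria as social welfare functions can be sketched with three classic examples (policy names and group utilities below are invented for the sketch, not taken from the paper):

```python
import math

# Hypothetical group utilities under three candidate policies (illustrative only).
policies = {
    "A": [8.0, 2.0],  # highest total utility, most unequal
    "B": [5.0, 4.0],  # slightly lower total, more equal
    "C": [4.0, 4.0],  # perfectly equal, lowest total
}

# Three classic social welfare functions.
def utilitarian(u):
    return sum(u)

def nash(u):  # Nash social welfare in log form
    return sum(math.log(x) for x in u)

def rawlsian(u):  # max-min (Rawlsian) welfare
    return min(u)

for name, swf in (("utilitarian", utilitarian), ("nash", nash), ("rawlsian", rawlsian)):
    best = max(policies, key=lambda p: swf(policies[p]))
    print(f"{name}: best policy is {best}")
```

Utilitarian welfare favors the unequal policy A, Nash welfare selects B, and B and C tie under the Rawlsian criterion, showing how the choice of welfare function encodes a distributive-justice stance.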
Novel Concentration of Measure Bounds with Applications to Fairness in Machine Learning
I introduce novel concentration-of-measure bounds for the supremum deviation, several variance concepts, and a family of game-theoretic welfare functions. In particular, I introduce empirically…
Algorithmic and Economic Perspectives on Fairness
TLDR
Algorithmic systems are used to inform consequential decisions in medicine, criminal justice, facial recognition, lending and insurance, and the allocation of public services, and are deployed to screen job applicants and to recommend products, people, and content.
Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems
TLDR
It is hoped the experience integrating fairness tools and approaches into large-scale and complex production systems will be useful to other practitioners facing similar challenges, and illuminating to academics and researchers looking to better address the needs of practitioners.
Fairness without Harm: Decoupled Classifiers with Preference Guarantees
TLDR
It is argued that when there is this kind of treatment disparity then it should be in the best interest of each group, and a recursive procedure is introduced that adaptively selects group attributes for decoupling to ensure preference guarantees in terms of generalization error.

References

Showing 1-10 of 63 references
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
TLDR
This work provides both heuristic justification and empirical evidence suggesting that a lower-bound on the welfare-based measures often leads to bounded inequality in algorithmic outcomes; hence presenting the first computationally feasible mechanism for bounding individual-level inequality.
Fairness Constraints: Mechanisms for Fair Classification
TLDR
This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
Algorithmic Fairness and the Social Welfare Function
TLDR
It is argued that it would be beneficial to model fairness and algorithmic bias more holistically, including both a generative model of the underlying social phenomena and a description of a global welfare function.
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
TLDR
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
TLDR
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
Fairness in Learning: Classic and Contextual Bandits
TLDR
A tight connection between fairness and the KWIK (Knows What It Knows) learning model is proved: a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and a worst-case exponential gap in regret between fair and non-fair learning algorithms.
Learning Fair Representations
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the…
Empirical Risk Minimization under Fairness Constraints
TLDR
This work presents an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem, and derives both risk and fairness bounds that support the statistical consistency of the approach.
Delayed Impact of Fair Machine Learning
TLDR
It is demonstrated that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not.
Fairness through awareness
TLDR
A framework for fair classification comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand and an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly is presented.