Fair classification and social welfare

@inproceedings{Hu2019FairCA,
  title={Fair classification and social welfare},
  author={Lily Hu and Yiling Chen},
  booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},
  year={2020}
}
  • Lily Hu, Yiling Chen
  • Published 1 May 2019
  • Economics
  • Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Now that machine learning algorithms lie at the center of many important resource allocation pipelines, computer scientists have been unwittingly cast as partial social planners. Given this state of affairs, important questions follow. How do leading notions of fairness as defined by computer scientists map onto longer-standing notions of social welfare? In this paper, we present a welfare-based analysis of fair classification regimes. Our main findings assess the welfare impact of fairness… 

Citations

Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing

This paper proposes Pareto Efficient Fairness (PEF) as a suitable fairness notion for supervised learning that can ensure an optimal trade-off between overall loss and other fairness criteria, and empirically demonstrates the effectiveness of the PEF solution and the extracted Pareto frontier on real-world datasets.

Uncertainty and the Social Planner’s Problem: Why Sample Complexity Matters

Welfare measures overall utility across a population, whereas malfare measures overall disutility, and the social planner’s problem can be cast either as maximizing the former or minimizing the latter.
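
One standard family that makes this duality concrete is the power mean over per-person (dis)utility; the following is an illustrative sketch of that family, not necessarily the paper's exact definitions:

    % Power-mean welfare over utilities u_i and malfare over disutilities d_i;
    % an illustrative family, not necessarily the paper's exact notation.
    W_p(u) = \Bigl( \tfrac{1}{n} \sum_{i=1}^{n} u_i^{p} \Bigr)^{1/p}
    \qquad
    M_p(d) = \Bigl( \tfrac{1}{n} \sum_{i=1}^{n} d_i^{p} \Bigr)^{1/p}

The planner then either maximizes W_p (smaller p puts more weight on the worse-off) or minimizes M_p (larger p puts more weight on the worst-off), and the paper's concern is how sampling uncertainty interacts with aggregates of this kind.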

Fairness through Social Welfare Optimization

We propose social welfare optimization as a general paradigm for formalizing fairness in AI systems. We argue that optimization models allow formulation of a wide range of fairness criteria as social welfare functions.

How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness

A novel methodology is presented to explore the trade-off between accuracy and fairness in terms of a Pareto front, and a multiobjective framework is proposed that seeks to optimize both measures.
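
As a rough sketch of how such a front can be traced, one can train models under a sweep of fairness weights and keep the non-dominated (accuracy, fairness-gap) pairs; the interface below is a hypothetical illustration, not the paper's multiobjective method:

    import numpy as np

    def parity_gap(y_pred, group):
        # Absolute difference in positive-prediction rates between two groups.
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def pareto_front(points):
        # points: iterable of (accuracy, fairness_gap) pairs, one per trained
        # model; higher accuracy and lower gap are better. Keep the points not
        # weakly dominated by any other point.
        points = list(points)
        return sorted(
            p for p in points
            if not any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)
        )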

Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning

The theoretical results characterize the optimal strategies in this class of policies, bound the Pareto errors due to inaccuracies in the scores, and show an equivalence between optimal strategies and a rich class of fairness-constrained profit-maximizing policies.

An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning

We address an inherent difficulty in welfare-theoretic fair machine learning (ML) by proposing an equivalently axiomatically justified alternative setting, and studying the resulting computational problem.

Fairness in Machine Learning

It is shown how causal Bayesian networks can play an important role to reason about and deal with fairness, especially in complex unfairness scenarios, and how optimal transport theory can be leveraged to develop methods that impose constraints on the full shapes of distributions corresponding to different sensitive attributes.
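
For the optimal-transport idea, a minimal sketch (with hypothetical, randomly generated scores) is to measure the 1-Wasserstein distance between the score distributions of two sensitive groups, which a method could then penalize or constrain:

    import numpy as np
    from scipy.stats import wasserstein_distance

    # Hypothetical classifier scores for two sensitive-attribute groups.
    rng = np.random.default_rng(0)
    scores_a = rng.normal(0.55, 0.15, size=1000)
    scores_b = rng.normal(0.45, 0.20, size=1000)

    # Optimal-transport view of distributional fairness: the 1-Wasserstein
    # distance between the groups' score distributions; driving it down pushes
    # the full shapes of the two distributions together.
    gap = wasserstein_distance(scores_a, scores_b)
    print(f"1-Wasserstein gap between group score distributions: {gap:.3f}")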

Fairness through Optimization

It is shown how optimization models can assist fairness-oriented decision making in the context of neural networks, support vector machines, and rule-based systems by maximizing a social welfare function subject to appropriate constraints.
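
A minimal sketch of that welfare-maximization pattern on a toy allocation problem (the per-person utility rates and the Nash-welfare objective are assumptions of the sketch, not any of the paper's models):

    import numpy as np
    from scipy.optimize import minimize

    n = 4
    rates = np.array([1.0, 2.0, 0.5, 1.5])   # assumed per-person utility rates

    def neg_welfare(x):
        # Negative log-utilitarian (Nash) welfare of allocation x; minimizing
        # it maximizes a concave social welfare function.
        return -np.sum(np.log(rates * x + 1e-9))

    result = minimize(
        neg_welfare,
        x0=np.array([0.7, 0.1, 0.1, 0.1]),    # deliberately unequal start
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
        method="SLSQP",
    )
    print(result.x)   # allocation maximizing the welfare objective

Swapping in other concave welfare functions (utilitarian, max-min) encodes different fairness criteria within the same template.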

Two-sided fairness in rankings via Lorenz dominance

This work proposes to generate rankings by maximizing concave welfare functions, and develops an efficient inference procedure based on the Frank-Wolfe algorithm that guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility.

Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making

The proposed framework provides a suitable solution to address group fairness concerns in the development of scoring systems and enables policymakers to set and customize their desired fairness requirements as well as other application-specific constraints.
...

References

Showing 1-10 of 42 references

Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

This work provides both heuristic justification and empirical evidence suggesting that a lower-bound on the welfare-based measures often leads to bounded inequality in algorithmic outcomes; hence presenting the first computationally feasible mechanism for bounding individual-level inequality.

Fairness Constraints: Mechanisms for Fair Classification

This paper introduces a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness, and shows on real-world data that this mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.
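
The measure referenced above is, roughly, the covariance between the sensitive attribute and each point's signed distance to the decision boundary; a sketch under that reading:

    import numpy as np

    def boundary_covariance(z, decision_values):
        # Empirical covariance between a binary sensitive attribute z and the
        # signed distance of each example to the decision boundary (e.g., the
        # output of a linear model's decision function). Constraining its
        # magnitude below a small threshold is the fairness knob, traded
        # against accuracy.
        z = np.asarray(z, dtype=float)
        d = np.asarray(decision_values, dtype=float)
        return float(np.mean((z - z.mean()) * d))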

Algorithmic Fairness and the Social Welfare Function

It is argued that it would be beneficial to model fairness and algorithmic bias more holistically, including both a generative model of the underlying social phenomena and a description of a global welfare function.

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.

Fairness in Learning: Classic and Contextual Bandits

A tight connection between fairness and the KWIK (Knows What It Knows) learning model is proved, yielding a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and showing a worst-case exponential gap in regret between fair and non-fair learning algorithms.

Learning Fair Representations

We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals are treated similarly).
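
A one-function sketch of the group-fairness criterion as stated (statistical parity of positive classifications):

    import numpy as np

    def statistical_parity_gap(y_pred, protected):
        # Positive-classification rate in the protected group minus the rate
        # in the population as a whole; zero means group fairness holds.
        y_pred = np.asarray(y_pred, dtype=float)
        protected = np.asarray(protected, dtype=bool)
        return float(y_pred[protected].mean() - y_pred.mean())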

Empirical Risk Minimization under Fairness Constraints

This work presents an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem, and derives both risk and fairness bounds that support the statistical consistency of the approach.
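
In outline, the approach couples the empirical risk with a fairness term; the penalized (rather than hard-constrained) variant below is an illustrative relaxation, with the weight lam and the score-parity penalty chosen for the sketch:

    import numpy as np

    def penalized_risk(w, X, y, z, lam):
        # Logistic empirical risk (labels y in {-1, +1}) plus a squared
        # group-parity penalty on mean scores; an illustrative relaxation
        # of the fairness-constrained ERM described above.
        X, y, z = map(np.asarray, (X, y, z))
        scores = X @ w
        risk = np.mean(np.log1p(np.exp(-y * scores)))
        parity = scores[z == 1].mean() - scores[z == 0].mean()
        return risk + lam * parity ** 2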

Delayed Impact of Fair Machine Learning

It is demonstrated that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not.

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
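
That constraint is standardly formalized as a Lipschitz condition on a randomized classifier M mapping individuals to distributions over outcomes:

    % Maximize expected utility subject to the Lipschitz (individual-fairness)
    % constraint: d is the task-specific similarity metric on individuals and
    % D a distance between distributions over outcomes.
    \max_{M} \; \mathbb{E}[\, U(M) \,]
    \quad \text{s.t.} \quad
    D\bigl( M(x), M(y) \bigr) \le d(x, y) \quad \forall x, y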