Individual Fairness in Pipelines

@article{Dwork2020IndividualFI,
  title={Individual Fairness in Pipelines},
  author={Cynthia Dwork and Christina Ilvento and Meena Jagadeesan},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.05167}
}
It is well understood that a system built from individually fair components may not itself be individually fair. In this work, we investigate individual fairness under pipeline composition. Pipelines differ from ordinary sequential or repeated composition in that individuals may drop out at any stage, and classification in subsequent stages may depend on the remaining "cohort" of individuals. As an example, a company might hire a team for a new project and at a later point promote the highest… 
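The cohort-dependence the abstract describes can be made concrete with a toy sketch (not from the paper; names, scores, and thresholds are hypothetical): stage 1 hires everyone above a score threshold, and stage 2 promotes the single highest scorer among the surviving cohort, so two nearly identical individuals can receive maximally different final outcomes.

```python
def hire(candidates, threshold=0.5):
    """Stage 1: hire every candidate at or above the threshold.
    In isolation this treats similarly scored candidates similarly."""
    return {name: score for name, score in candidates.items() if score >= threshold}

def promote_top(cohort):
    """Stage 2: promote only the highest scorer in the remaining cohort.
    The outcome for each individual depends on who else survived stage 1."""
    top = max(cohort, key=cohort.get)
    return {name: (name == top) for name in cohort}

candidates = {"ada": 0.90, "ben": 0.89, "cat": 0.40}
cohort = hire(candidates)        # cat drops out of the pipeline
outcomes = promote_top(cohort)   # only the top scorer is promoted

# ada and ben have nearly identical scores (gap of 0.01), yet their
# final outcomes differ completely: the composed pipeline is not
# individually fair even though stage 1 looks fair on its own.
```

The point of the sketch is that stage 2's behavior is defined relative to the cohort that remains, which is exactly the feature that distinguishes pipelines from ordinary sequential composition.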

Tables from this paper

On Fairness and Stability in Two-Sided Matchings
TLDR
Efficient new algorithms for finding fair and stable matchings when: (i) the hospitals’ preferences are fair, and (ii) the fairness metric satisfies a strong “proto-metric” condition: the distance between every two doctors is either zero or one.
Multi Stage Screening: Enforcing Fairness and Maximizing Efficiency in a Pre-Existing Pipeline
TLDR
Algorithms for satisfying Equal Opportunity over the selection process and maximizing precision (the fraction of interviews that yield qualified candidates) as well as linear combinations of precision and recall are exhibited.
Achieving Downstream Fairness with Geometric Repair
TLDR
It is argued that fairer classification outcomes can be produced through the development of setting-specific interventions, and it is shown that attaining distributional parity minimizes rate disparities across all thresholds in the up/downstream setting.
Fair and Optimal Cohort Selection for Linear Utilities
TLDR
This work introduces a specific instance of cohort selection where the goal is to choose a cohort maximizing a linear utility function, and gives approximately optimal polynomial-time algorithms for this problem in both an offline setting, where the entire fair classifier is given at once, and an online setting, where candidates arrive one at a time and are classified as they arrive.
Fairness Through Counterfactual Utilities
TLDR
This work derives two fairness principles that enable a generalized set of group fairness definitions that unambiguously extend to all machine learning environments while still retaining their original fairness notions and provides concrete examples of how this framework resolves known fairness issues in classification, clustering, and reinforcement learning problems.
Beyond Individual and Group Fairness
TLDR
A new data-driven model of fairness is presented that, unlike existing static definitions of individual or group fairness, is guided by the unfairness complaints received by the system and takes into account their potential incompatibilities.
An Introduction to Algorithmic Fairness
TLDR
Several ways in which machine learning can result in discrimination are introduced, and notions of fairness proposed in the computer science literature are discussed, and some of the underlying causes of unfair predictions are explored.
Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions
TLDR
The need for new fairness metrics that account for the impact on different population groups apart from just the disparity between groups is highlighted, and a list of combinations of interventions that perform best for different fairness and utility metrics are offered to aid the design of fair ML pipelines.
Learning Certified Individually Fair Representations
TLDR
This work introduces the first method which generalizes individual fairness to rich similarity notions via logical constraints while also enabling data consumers to obtain fairness certificates for their models through representation learning.
Fairness On The Ground: Applying Algorithmic Fairness Approaches to Production Systems
TLDR
It is hoped that the experience of integrating fairness tools and approaches into large-scale and complex production systems will be useful to other practitioners facing similar challenges, and illuminating to academics and researchers looking to better address the needs of practitioners.

References

SHOWING 1-10 OF 34 REFERENCES
Fairness Under Composition
TLDR
This work identifies pitfalls of naive composition and gives general constructions for fair composition, demonstrating both that classifiers that are fair in isolation do not necessarily compose into fair systems and also that seemingly unfair components may be carefully combined to construct fair systems.
Eliciting and Enforcing Subjective Individual Fairness
TLDR
A framework for fairness elicitation is considered, in which fairness is indirectly specified only via a sample of pairs of individuals who should be treated (approximately) equally on the task, and a provably convergent oracle-efficient algorithm is provided for minimizing error subject to the fairness constraints.
Fairness at Equilibrium in the Labor Market
TLDR
This work constructs a dual labor market model, in which firm strategies are constrained to ensure group-level fairness and a Permanent Labor Market, and shows that restrictions on hiring practices induce an equilibrium that Pareto-dominates those arising from strategies that employ statistical discrimination or a "group-blind" criterion.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness
TLDR
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
Fair Pipelines
TLDR
This work studies how fairness propagates through a compound decision-making process, which it calls a pipeline, thereby facilitating fair machine learning in the real world by decoupling the fairness considerations of compound decisions.
Fairness through awareness
TLDR
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
Fairness Through Computationally-Bounded Awareness
TLDR
A new definition of fairness is proposed, parameterized by a similarity metric on pairs of individuals and a collection of "comparison sets" over pairs of individuals, along with a general-purpose framework for learning a metric multifair hypothesis that achieves near-optimal loss from a small number of random samples from the metric $\delta$.
Inherent Trade-Offs in the Fair Determination of Risk Scores
TLDR
Some of the ways in which key notions of fairness are incompatible with each other are suggested, and hence a framework for thinking about the trade-offs between them is provided.
Online Learning with an Unknown Fairness Metric
TLDR
An algorithm in the adversarial context setting that has a number of fairness violations that depends only logarithmically on $T$, while obtaining an optimal $O(\sqrt{T})$ regret bound to the best fair policy is proposed.
Multi-category fairness in sponsored search auctions
TLDR
This work proposes inter-category and intra-category fairness desiderata that take inspiration from individual fairness and envy-freeness, and investigates the "platform utility" achievable by mechanisms satisfying these desiderata.