• Corpus ID: 230523769

Characterizing Intersectional Group Fairness with Worst-Case Comparisons

@inproceedings{Ghosh2021CharacterizingIG,
  title={Characterizing Intersectional Group Fairness with Worst-Case Comparisons},
  author={A. Ghosh and Lea Genuit and Mary Reagan},
  booktitle={AIDBEI},
  year={2021}
}
Machine Learning or Artificial Intelligence algorithms have gained considerable scrutiny in recent times owing to their propensity towards imitating and amplifying existing prejudices in society. This has led to a niche but growing body of work that identifies and attempts to fix these biases. A first step towards making these algorithms more fair is designing metrics that measure unfairness. Most existing work in this field deals with either a binary view of fairness (protected vs. unprotected… 
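The "worst-case comparisons" in the title can be read as extending a standard group-fairness metric by comparing its worst-off intersectional subgroup against its best-off one. Below is a minimal sketch of that idea, assuming a demographic-parity-style selection-rate metric and two protected attributes; the function names and the min/max-ratio form are illustrative, not necessarily the paper's exact definitions.

```python
from itertools import product
import numpy as np

def selection_rate(y_pred, mask):
    """Fraction of positive predictions within one subgroup."""
    return y_pred[mask].mean()

def worst_case_disparity(y_pred, gender, race):
    """Smallest ratio of selection rates between any two intersectional
    subgroups (1.0 = perfectly equal, lower = more unfair)."""
    rates = []
    for g, r in product(np.unique(gender), np.unique(race)):
        mask = (gender == g) & (race == r)
        if mask.sum() > 0:                      # skip empty intersections
            rates.append(selection_rate(y_pred, mask))
    return min(rates) / max(rates)

# toy example: binary predictions plus two protected attributes
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1])
gender = np.array(["M", "M", "F", "F", "M", "F", "F", "M"])
race   = np.array(["A", "B", "A", "B", "A", "B", "A", "B"])
print(worst_case_disparity(y_pred, gender, race))  # 0.5: worst-off subgroup selected half as often as best-off
```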

Citations

Bias Mitigation Framework for Intersectional Subgroups in Neural Networks

A fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes is proposed, and models trained with this framework are shown to become causally fair and insensitive to the values of protected attributes.

Multi-dimensional discrimination in Law and Machine Learning - A comparative overview

This work overviews the different definitions of multi-dimensional discrimination/fairness in the legal domain as well as how they have been transferred/operationalized in the fairness-aware machine learning domain, draws the connections, identifies the limitations, and points out open research directions.

Inherent Limitations of AI Fairness

In the past, the range of tasks that a computer could carry out was limited by what could be hard-coded by a programmer. Now, recent advances in machine learning make it possible to learn patterns…

Using Positive Matching Contrastive Loss with Facial Action Units to mitigate bias in Facial Expression Recognition

  • Varsha Suresh, Desmond C. Ong
  • Computer Science
    2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII)
  • 2022
Machine learning models automatically learn discriminative features from the data, and are therefore susceptible to learning strongly correlated biases, such as using protected attributes like gender…

An exploratory data analysis: the performance differences of a medical code prediction system on different demographic groups

Recent studies show that neural natural language processing models for medical code prediction suffer from a label imbalance issue. This study aims to investigate further imbalance in a medical code…

Fairness in Recommender Systems: Research Landscape and Future Directions

It is found that in many research works in computer science, very abstract problem operationalizations are prevalent and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth.

What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment

The fundamental problem of causal inference – that we never observe counterfactuals – prevents us from identifying how many might be negatively affected by a proposed intervention. If, in an A/B…
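For a binary outcome, a worked example of the kind of sharp bound the title refers to follows from the Fréchet inequalities alone: with p0 = P(Y(0)=1) and p1 = P(Y(1)=1), the fraction harmed P(Y(1)=0, Y(0)=1) lies in [max(0, p0 - p1), min(p0, 1 - p1)]. The sketch below only computes these marginal bounds; the paper's tightening of them using covariate information is not reproduced here.

```python
def harm_bounds(p_treated, p_control):
    """Sharp bounds on the fraction harmed, P(Y(1)=0, Y(0)=1), for a binary
    outcome given only the marginal success rates p_treated = P(Y(1)=1)
    and p_control = P(Y(0)=1) (Frechet bounds; no covariates used)."""
    lower = max(0.0, p_control - p_treated)
    upper = min(p_control, 1.0 - p_treated)
    return lower, upper

# A/B test where treatment looks better on average (70% vs 60% success):
print(harm_bounds(p_treated=0.70, p_control=0.60))  # (0.0, 0.3): up to 30% could still be harmed
```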

De-biasing “bias” measurement

The “double-corrected” variance estimator is proposed, which provides unbiased estimates and uncertainty quantification of the variance of model performance across groups, and is conceptually simple and easily implementable without statistical software packages or numerical optimization.

Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

This work grapples with questions that arise along three stages of the machine learning pipeline when incorporating intersectionality as multiple demographic attributes: which demographic attributes to include as dataset labels, how to handle the progressively smaller size of subgroups during model training, and how to move beyond existing evaluation metrics when benchmarking model fairness for more subgroups.

Subverting Fair Image Search with Generative Adversarial Perturbations

This work develops and then attacks a state-of-the-art, fairness-aware image search engine using images that have been maliciously modified using a Generative Adversarial Perturbation (GAP) model, demonstrating that these attacks are robust across a number of variables, that they have close to zero impact on the relevance of search results, and that they succeed under a strict threat model.

References

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.

Fairness of Exposure in Rankings

This work proposes a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation, and develops efficient algorithms for finding rankings that maximize the utility for the user while provably satisfying a specifiable notion of fairness.
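A hedged sketch of the underlying quantity: per-group average exposure under a standard logarithmic position-bias model. The 1/log2(1 + rank) weighting and the per-group comparison below are one common instantiation, not necessarily the paper's exact formulation, which optimizes probabilistic rankings under exposure constraints.

```python
import math
from collections import defaultdict

def group_exposure(ranking, groups):
    """Average exposure per group, assuming attention decays as 1/log2(1 + rank).
    ranking: item ids ordered best-first; groups: item id -> group label."""
    totals, counts = defaultdict(float), defaultdict(int)
    for position, item in enumerate(ranking, start=1):
        exposure = 1.0 / math.log2(position + 1)   # position-bias weight at this rank
        totals[groups[item]] += exposure
        counts[groups[item]] += 1
    return {g: totals[g] / counts[g] for g in totals}

ranking = ["a", "b", "c", "d"]
groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
print(group_exposure(ranking, groups))  # G1 ~0.82 vs G2 ~0.47: G2 items get far less exposure
```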

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
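For intuition, a brute-force version of such an audit over conjunctions of binary protected attributes is sketched below; it is exponential in the number of attributes, which is exactly the scaling problem the paper's reduction to weak agnostic learning addresses. Weighting each subgroup's false-positive-rate gap by the subgroup's share of negatives is an assumption modeled on the paper's setup, not a verbatim reproduction.

```python
from itertools import combinations, product
import numpy as np

def audit_fpr_subgroups(y_true, y_pred, attrs):
    """Brute-force audit: over every conjunction of binary protected attributes,
    return the subgroup with the largest size-weighted gap between its
    false-positive rate and the overall false-positive rate.
    attrs: dict attribute name -> 0/1 array. Exponential in len(attrs)."""
    neg = (y_true == 0)
    overall_fpr = y_pred[neg].mean()
    worst_subgroup, worst_gap = None, 0.0
    names = list(attrs)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            for values in product([0, 1], repeat=k):
                mask = neg.copy()
                for name, v in zip(combo, values):
                    mask &= (attrs[name] == v)
                if mask.sum() == 0:
                    continue
                weight = mask.sum() / neg.sum()          # subgroup's share of negatives
                gap = weight * abs(y_pred[mask].mean() - overall_fpr)
                if gap > worst_gap:
                    worst_subgroup, worst_gap = dict(zip(combo, values)), gap
    return worst_subgroup, worst_gap

y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 1, 0])
attrs = {"female": np.array([1, 1, 0, 0, 1, 0]),
         "young":  np.array([1, 0, 1, 0, 0, 1])}
print(audit_fpr_subgroups(y_true, y_pred, attrs))  # flags a 'young'-defined subgroup
```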

Equality of Opportunity in Supervised Learning

This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
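The criterion, often called equal opportunity, requires equal true-positive rates across groups. A minimal sketch of measuring the gap is shown below; the paper's post-processing step that adjusts a learned predictor to close the gap is not shown.

```python
import numpy as np

def true_positive_rate(y_true, y_pred, mask):
    """P(prediction = 1 | label = 1) within the masked group."""
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates between any two groups."""
    tprs = [true_positive_rate(y_true, y_pred, group == g) for g in np.unique(group)]
    return max(tprs) - min(tprs)

y_true = np.array([1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1])
group  = np.array(["a", "a", "a", "b", "b", "b"])
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5: group "a" gets far fewer true positives
```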

On the apparent conflict between individual and group fairness

This paper draws on discussions from within the fair machine learning research and from political and legal philosophy to argue that individual and group fairness are not fundamentally in conflict, and outlines accounts of egalitarian fairness which encompass plausible motivations for both group and individual fairness.

A Survey on Bias and Fairness in Machine Learning

This survey investigated different real-world applications that have shown biases in various ways, and created a taxonomy for fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems.

Fairness-Aware Learning for Continuous Attributes and Treatments

This work exploits Witsenhausen’s characterization of the Rényi correlation coefficient to propose a differentiable implementation linked to f-divergences that allows fairness to be extended to variables such as mixed ethnic groups or financial status without threshold effects.
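As a heavily simplified stand-in for the Rényi (HGR) penalty described here, the sketch below adds an absolute Pearson correlation between model scores and a continuous sensitive attribute to the training loss; it only illustrates the shape of the approach and is not the estimator proposed in the paper.

```python
import torch

def correlation_penalty(y_score, s, eps=1e-8):
    """Absolute Pearson correlation between scores and a continuous sensitive
    attribute s: a differentiable but much weaker proxy for Renyi/HGR correlation."""
    y_c = y_score - y_score.mean()
    s_c = s - s.mean()
    cov = (y_c * s_c).mean()
    denom = torch.sqrt((y_c ** 2).mean()) * torch.sqrt((s_c ** 2).mean()) + eps
    return cov.abs() / denom

scores = torch.tensor([0.2, 0.8, 0.4, 0.9])
age    = torch.tensor([23.0, 54.0, 31.0, 60.0])
print(correlation_penalty(scores, age))   # close to 1: scores track age strongly
# during training: loss = task_loss + lam * correlation_penalty(model(x).squeeze(), s)
```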

Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
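A heavily simplified greedy sketch of prefix-proportional re-ranking, in the spirit of (but not identical to) the deterministic re-rankers the paper describes; the target proportions, the tie-breaking, and the assumption that every candidate's group appears in the target are illustrative.

```python
def rerank(candidates, target):
    """Greedy fairness-aware re-ranking sketch.
    candidates: (id, score, group) tuples sorted by score, best first.
    target: group -> desired proportion in every ranking prefix.
    At each position, take the best remaining candidate from the group
    that is furthest below its target so far."""
    remaining = list(candidates)
    ranking, counts = [], {g: 0 for g in target}
    while remaining:
        k = len(ranking) + 1
        # deficit = how far each group falls below its target at prefix length k
        deficits = {g: target[g] * k - counts[g] for g in target
                    if any(c[2] == g for c in remaining)}
        g_pick = max(deficits, key=deficits.get)
        pick = next(c for c in remaining if c[2] == g_pick)   # best-scored in that group
        remaining.remove(pick)
        ranking.append(pick)
        counts[g_pick] += 1
    return ranking

cands = [("u1", 0.9, "M"), ("u2", 0.8, "M"), ("u3", 0.7, "F"),
         ("u4", 0.6, "M"), ("u5", 0.5, "F")]
print([c[0] for c in rerank(cands, {"M": 0.5, "F": 0.5})])
# ['u1', 'u3', 'u2', 'u5', 'u4']: alternates groups to keep every prefix balanced
```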