Corpus ID: 237291764

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

@article{Cheng2021SocialNB,
  title={Social Norm Bias: Residual Harms of Fairness-Aware Algorithms},
  author={Myra Cheng and Maria De-Arteaga and Lester W. Mackey and Adam Tauman Kalai},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.11056}
}
Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely-defined groups related to a sensitive attribute like gender or race. However, these algorithms seldom account for within-group heterogeneity and biases that may disproportionately affect some members of a group. In this work, we characterize Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination that may be exhibited by machine learning models, even when… 
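As an illustrative sketch of the tension the abstract describes (hypothetical data and thresholds, not the paper's method or experiments): a coarse group-fairness metric such as the demographic-parity gap can look satisfied across a sensitive attribute while finer subgroups within one of the coarse groups diverge sharply.

```python
# Hypothetical data: a coarse demographic-parity gap near zero can hide a
# large gap between subgroups inside one of the coarse groups.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def positive_rate(preds, mask):
    """Fraction of positive predictions within the masked subpopulation."""
    return preds[mask].mean()

gender = rng.integers(0, 2, n)       # coarse sensitive attribute (0/1)
subgroup = rng.integers(0, 2, n)     # within-group attribute the constraint ignores
base = rng.random(n) < 0.5
preds = np.where((gender == 1) & (subgroup == 1), rng.random(n) < 0.7,
        np.where((gender == 1) & (subgroup == 0), rng.random(n) < 0.3, base)).astype(int)

# Coarse gap across gender is small ...
print(abs(positive_rate(preds, gender == 0) - positive_rate(preds, gender == 1)))
# ... while the gap between subgroups inside gender == 1 is large.
print(abs(positive_rate(preds, (gender == 1) & (subgroup == 0)) -
          positive_rate(preds, (gender == 1) & (subgroup == 1))))
```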
1 Citation

More Data Can Lead Us Astray: Active Data Acquisition in the Presence of Label Bias

TLDR
This work empirically shows that, when label bias is overlooked, collecting more data can aggravate bias, and that imposing fairness constraints that rely on the observed labels during data collection may not address the problem.

References

SHOWING 1-10 OF 99 REFERENCES

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness

TLDR
It is proved that the computational problem of auditing subgroup fairness, for both equality of false positive rates and statistical parity, is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subgroup classes.
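A minimal auditing sketch in the spirit of this reference (assumed setup, not the paper's learning-based auditor): exhaustively scan subgroups defined by conjunctions of two binary attributes and report the largest false-positive-rate gap relative to the overall population. The paper's point is that such brute-force scans do not scale to rich subgroup classes, which is why auditing is framed as a learning problem.

```python
# Assumed setup: binary labels/predictions and named binary attributes.
from itertools import product
import numpy as np

def fpr(y_true, y_pred, mask):
    """False positive rate within the masked subpopulation (NaN if empty)."""
    neg = mask & (y_true == 0)
    return y_pred[neg].mean() if neg.any() else np.nan

def audit_fpr_gaps(y_true, y_pred, attrs):
    """attrs: dict name -> binary array; returns the worst subgroup FPR gap."""
    overall = fpr(y_true, y_pred, np.ones_like(y_true, dtype=bool))
    worst = (None, 0.0)
    names = sorted(attrs)
    for a, b in product(names, names):
        if a >= b:                                # visit each unordered pair once
            continue
        for va, vb in product([0, 1], repeat=2):
            mask = (attrs[a] == va) & (attrs[b] == vb)
            gap = abs(fpr(y_true, y_pred, mask) - overall)
            if not np.isnan(gap) and gap > worst[1]:
                worst = ((a, va, b, vb), gap)
    return worst

rng = np.random.default_rng(1)
y, p = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
attrs = {"gender": rng.integers(0, 2, 1000), "race": rng.integers(0, 2, 1000)}
print(audit_fpr_gaps(y, p, attrs))
```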

Bias Mitigation Post-processing for Individual and Group Fairness

TLDR
A novel framework, in which an individual bias detector prioritizes data samples for a bias mitigation algorithm targeting the group fairness measure of disparate impact, shows superior performance on the combination of classification accuracy, individual fairness, and group fairness across several real-world datasets.
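A rough sketch of the general idea (hypothetical scikit-learn-style model and column layout, not the paper's exact algorithm): flag samples whose prediction changes when the protected attribute is flipped, then assign flagged unprivileged samples the favorable outcome so the disparate-impact measure improves.

```python
# Hypothetical names: `model` follows the scikit-learn predict() interface and
# X stores a binary protected attribute in column `protected_col`.
import numpy as np

def individual_bias_mask(model, X, protected_col):
    """Flag samples whose prediction flips when the protected attribute flips."""
    X_flip = X.copy()
    X_flip[:, protected_col] = 1 - X_flip[:, protected_col]
    return model.predict(X) != model.predict(X_flip)

def debias_postprocess(model, X, protected_col, unprivileged_value=0):
    """Give flagged unprivileged samples the favorable (positive) outcome."""
    preds = np.asarray(model.predict(X)).copy()
    flagged = individual_bias_mask(model, X, protected_col)
    unpriv = X[:, protected_col] == unprivileged_value
    preds[flagged & unpriv] = 1
    return preds
```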

Fairness through awareness

TLDR
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
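A minimal sketch of the "similar individuals, similar treatment" constraint, assuming a binary classifier that outputs a probability and a placeholder stand-in for the task-specific metric D: the check below is the Lipschitz-style condition |M(x) − M(x′)| ≤ D(x, x′) over all pairs.

```python
# Placeholder metric: distance between one-dimensional feature values stands in
# for the paper's hypothetical task-specific similarity metric.
import numpy as np

def max_violation(probs, distances):
    """Largest amount by which |M(x) - M(x')| exceeds d(x, x') over all pairs."""
    treatment_gap = np.abs(probs[:, None] - probs[None, :])
    return np.max(treatment_gap - distances)

rng = np.random.default_rng(0)
x = rng.random(5)                      # toy individuals
probs = 0.5 * x + 0.25                 # toy model outputs, 1-Lipschitz in x
D = np.abs(x[:, None] - x[None, :])    # placeholder task-specific metric
print("constraint satisfied:", max_violation(probs, D) <= 0)
```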

What's sex got to do with machine learning?

TLDR
It is suggested that formal diagrams of constitutive relations would present an entirely different path toward reasoning about discrimination because they proffer a model of how the meaning of a social group emerges from its constitutive features.

Towards a critical race methodology in algorithmic fairness

TLDR
It is argued that algorithmic fairness researchers need to take into account the multidimensionality of race, take seriously the processes of conceptualizing and operationalizing race, focus on social processes which produce racial inequality, and consider perspectives of those most affected by sociotechnical systems.

Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

TLDR
This work presents a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems, and is the first large-scale deployed framework for ensuring fairness in the hiring domain.
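A simplified sketch of one re-ranking idea in this vein (an assumed greedy scheme, not the deployed LinkedIn system): build the ranking so that each group's count in every top-k prefix stays at or above floor(k × target share), otherwise take the best-scored remaining candidate.

```python
# Assumed greedy prefix-constrained re-ranking; `target_shares` is a
# hypothetical per-group minimum share of every ranking prefix.
import math

def rerank(candidates, target_shares):
    """candidates: list of (score, group), higher score = better."""
    pools = {}
    for score, group in sorted(candidates, reverse=True):
        pools.setdefault(group, []).append((score, group))
    ranking, counts = [], {g: 0 for g in pools}
    while any(pools.values()):
        k = len(ranking) + 1
        # Groups currently below their minimum required count for this prefix.
        needy = [g for g in pools if pools[g] and
                 counts[g] < math.floor(k * target_shares.get(g, 0.0))]
        pick_from = needy or [g for g in pools if pools[g]]
        # Among eligible groups, take the highest-scored remaining candidate.
        g = max(pick_from, key=lambda grp: pools[grp][0][0])
        ranking.append(pools[g].pop(0))
        counts[g] += 1
    return ranking

print(rerank([(0.9, "a"), (0.8, "a"), (0.7, "b"), (0.6, "b")], {"b": 0.5}))
```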

Decision Theory for Discrimination-Aware Classification

TLDR
The first and second solutions exploit, respectively, the reject option of probabilistic classifiers and the disagreement region of general classifier ensembles to reduce discrimination, and both solutions are related to decision theory for a better understanding of the process.
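A minimal sketch of the reject-option idea (assuming binary labels, a probabilistic classifier, and a binary protected attribute): inside a low-confidence band around 0.5, give the favorable label to the unprivileged group and the unfavorable label to the privileged group.

```python
# Assumed inputs: P(y=1 | x) scores and a binary protected attribute;
# `theta` is the half-width of the low-confidence (reject) band.
import numpy as np

def reject_option_predict(probs, protected, theta=0.1, unprivileged_value=0):
    preds = (probs >= 0.5).astype(int)
    in_band = np.abs(probs - 0.5) <= theta
    preds[in_band & (protected == unprivileged_value)] = 1   # favorable outcome
    preds[in_band & (protected != unprivileged_value)] = 0   # unfavorable outcome
    return preds

probs = np.array([0.45, 0.55, 0.90, 0.52])
protected = np.array([0, 1, 0, 0])
print(reject_option_predict(probs, protected))   # -> [1 0 1 1]
```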

Unsupervised Discovery of Implicit Gender Bias

TLDR
This work takes an unsupervised approach to identifying gender bias at a comment or sentence level, and presents a model that can surface text likely to contain bias, showing how biased comments directed towards female politicians contain mixed criticisms and references to their spouses.

Diversity and Inclusion Metrics in Subset Selection

TLDR
New metrics based on diversity and inclusion are introduced, which can be applied together, separately, and in tandem with additional fairness constraints, to create outputs that account for social power and access differentials.

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models

TLDR
It is found that all three of the widely-used MLMs the authors evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs, a benchmark for measuring some forms of social bias in language models against protected demographic groups in the US.
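A simplified scoring sketch in the spirit of this benchmark (not the exact CrowS-Pairs metric, which masks only the tokens shared between the paired sentences): compare the pseudo-log-likelihoods a masked language model assigns to a more- and a less-stereotyping sentence; the sentence with the higher score is the one the model "favors". The sentence pair below is a placeholder, not taken from the dataset.

```python
# Assumptions: the Hugging Face transformers and torch packages and a standard
# BERT checkpoint; the score sums the log-probability of each token when it is
# masked, a common pseudo-log-likelihood approximation.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):              # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

pair = ("Placeholder stereotyping sentence.", "Placeholder minimally edited counterpart.")
scores = [pseudo_log_likelihood(s) for s in pair]
print("model favors first sentence:", scores[0] > scores[1])
```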
...