Post-processing for Individual Fairness
@inproceedings{Petersen2021PostprocessingFI,
  title     = {Post-processing for Individual Fairness},
  author    = {Felix Petersen and Debarghya Mukherjee and Yuekai Sun and Mikhail Yurochkin},
  booktitle = {NeurIPS},
  year      = {2021}
}
Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of post-processing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner has access only to the predictions of the original model and a similarity graph between individuals that guides the desired fairness constraints. We cast the IF…
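As a concrete illustration of this setting, below is a minimal sketch of graph-Laplacian smoothing, one natural way to post-process the base model's scores toward IF using only the predictions and a similarity matrix W; the function name, the lambda trade-off parameter, and the closed-form solve are our own illustrative choices, not the paper's stated algorithm.

```python
import numpy as np

def postprocess_if(y_hat: np.ndarray, W: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Graph-Laplacian smoothing of base-model outputs (illustrative sketch).

    Solves  min_f ||f - y_hat||^2 + lam * sum_ij W_ij (f_i - f_j)^2,
    i.e.    min_f ||f - y_hat||^2 + lam * f^T L f,
    where L = D - W is the Laplacian of the similarity graph W.
    Closed form: f = (I + lam * L)^{-1} y_hat.
    """
    D = np.diag(W.sum(axis=1))  # degree matrix
    L = D - W                   # graph Laplacian
    n = len(y_hat)
    return np.linalg.solve(np.eye(n) + lam * L, y_hat)

# Toy usage: two similar individuals with very different base scores
# get pulled toward each other, trading fidelity for fairness.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
print(postprocess_if(np.array([0.9, 0.1]), W, lam=2.0))  # [0.58, 0.42]
```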
3 Citations
Domain Adaptation meets Individual Fairness. And they get along
- Computer Science, ArXiv
- 2022
Shows that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models, and that representation-alignment methods from domain adaptation can be adapted to enforce (individual) fairness.
Gerrymandering Individual Fairness
- Computer Science
- 2022
Proves that gerrymandering individual fairness is possible in the context of predicting scores, and argues that individual fairness provides a very weak notion of fairness for some choices of feature space and metric.
References
Showing 1-10 of 43 references
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Computer Science, NAACL
- 2019
Introduces BERT, a new language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pre-trained model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
- Psychology, Computer Science, FAT
- 2019
Presents a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes for people's lives, and examines how including explicit gender indicators in different semantic representations of online biographies affects occupation classification.
Fairness through awareness
- Computer Science, ITCS '12
- 2012
Presents a framework for fair classification comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
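For context, the constraint that similar individuals be treated similarly is typically formalized as a Lipschitz condition on the model; a standard rendering, in our own notation, is:

```latex
% Individual fairness as a Lipschitz condition (our rendering of the
% Dwork et al. framework; the metric names are notational choices):
% a map f from individuals to outcomes is individually fair if
\[
  d_Y\bigl(f(x),\, f(x')\bigr) \;\le\; L \, d_X(x,\, x')
  \quad \text{for all individuals } x, x',
\]
% where d_X measures task-specific similarity between individuals and
% d_Y measures divergence between outcomes (or outcome distributions).
```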
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
- Computer Science, FAccT
- 2021
Provides recommendations for working with large language models, including weighing the environmental and financial costs first, investing resources in curating and carefully documenting datasets rather than ingesting everything on the web, and carrying out pre-development exercises that evaluate how the planned approach fits into research and development goals and supports stakeholder values.
SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
- Computer Science, ICLR
- 2021
Provides theoretical results guaranteeing that the proposed approach trains certifiably fair ML models, and demonstrates improved fairness metrics in comparison to several recent fair training procedures on three ML tasks that are susceptible to algorithmic bias.
Two Simple Ways to Learn Individual Fairness Metrics from Data
- Computer Science, ICML
- 2020
Shows empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases, and provides theoretical guarantees on the statistical performance of both approaches.
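One of the simplest instantiations of the idea above is a metric that assigns zero distance along a learned sensitive subspace; the following is a hypothetical sketch of that construction (the names and the projector-based form are ours, not the paper's exact estimator):

```python
import numpy as np

def fair_metric(sensitive_dirs: np.ndarray):
    """Sketch of a Mahalanobis-style 'fair' metric that ignores variation
    along a learned sensitive subspace (illustrative, not the paper's
    exact estimator).

    sensitive_dirs: (d, k) matrix whose columns span the sensitive
    subspace, e.g. directions predictive of gender or race.
    """
    A = sensitive_dirs
    # Orthogonal projector onto the complement of span(A): differences
    # along sensitive directions contribute zero distance.
    P = np.eye(A.shape[0]) - A @ np.linalg.pinv(A)

    def d(x1: np.ndarray, x2: np.ndarray) -> float:
        diff = P @ (x1 - x2)
        return float(np.sqrt(diff @ diff))

    return d

# Toy usage: with the first coordinate marked sensitive, two points that
# differ only in that coordinate are at distance zero under the metric.
d = fair_metric(np.array([[1.0], [0.0]]))
print(d(np.array([1.0, 2.0]), np.array([5.0, 2.0])))  # 0.0
```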
An Algorithmic Framework for Fairness Elicitation
- Computer Science, FORC
- 2021
This work introduces a framework in which pairs of individuals can be identified as requiring (approximately) equal treatment under a learned model, or requiring ordered treatment such as "applicant Alice should be at least as likely to receive a loan as applicant Bob".
Individually Fair Gradient Boosting
- Computer Science, ICLR
- 2021
This work considers the task of enforcing individual fairness in gradient boosting and develops a functional gradient descent on a (distributionally) robust loss function that encodes the intuition of algorithmic fairness for the ML task at hand.
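Both this work and SenSeI above enforce IF by training against a distributionally robust loss; schematically, and in our own simplified notation (not copied from either paper), the objective takes the form:

```latex
% Distributionally robust fair training (schematic, simplified notation):
% the adversary may only perturb the data distribution along directions
% that the fair metric d_x deems cheap, i.e., between similar individuals.
\[
  \min_{f}\; \sup_{P \,:\, W_{d_x}(P,\, P_n) \le \epsilon}\;
    \mathbb{E}_{(x,y)\sim P}\bigl[\ell(f(x),\, y)\bigr]
\]
% Here W_{d_x} is an optimal-transport distance whose ground cost is the
% fair metric d_x, P_n is the empirical distribution, and epsilon bounds
% the adversary's budget.
```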
Priority-based Post-Processing Bias Mitigation for Individual and Group Fairness
- Computer Science, ArXiv
- 2021
Proposes a priority-based post-processing bias mitigation method addressing both group and individual fairness, built on the notions that similar individuals should receive similar outcomes irrespective of socio-economic factors and that greater unfairness implies greater injustice, and validates it with a case study on tariff allotment in a smart grid.
StereoSet: Measuring stereotypical bias in pretrained language models
- Computer Science, ACL
- 2021
Presents StereoSet, a large-scale natural English dataset for measuring stereotypical biases in four domains (gender, profession, race, and religion), and shows that popular models such as BERT, GPT-2, RoBERTa, and XLNet exhibit strong stereotypical biases.