When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

@article{Suriyakumar2022WhenPH,
  title={When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction},
  author={Vinith M. Suriyakumar and Marzyeh Ghassemi and Berk Ustun},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.02058}
}
The standard approach to personalization in machine learning consists of training a model with group attributes like sex, age group, and blood type. In this work, we show that this approach to personalization fails to improve performance for all groups who provide personal data. We discuss how this effect inflicts harm in applications where models assign predictions on the basis of group membership. We propose collective preference guarantees to ensure the fair use of group attributes in… 
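
As a rough, hypothetical illustration of the setup described above (not the authors' experiments), the sketch below trains a generic model and a "personalized" model that additionally receives a group attribute on synthetic data, then compares held-out error within each group; the paper's point is that the personalized model is not guaranteed to do better for every group that reports its attribute. scikit-learn and NumPy are assumed; all feature and group names are made up.

```python
# Minimal sketch (not the authors' method): compare a generic model with a
# "personalized" model that also sees a group attribute, group by group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))            # synthetic features
g = rng.integers(0, 2, size=n)         # hypothetical group attribute (e.g. sex)
y = (X[:, 0] + 0.5 * g + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, g_tr, g_te, y_tr, y_te = train_test_split(
    X, g, y, test_size=0.5, random_state=0
)

generic = LogisticRegression().fit(X_tr, y_tr)
personalized = LogisticRegression().fit(np.column_stack([X_tr, g_tr]), y_tr)

for grp in (0, 1):
    m = g_te == grp
    err_generic = 1 - generic.score(X_te[m], y_te[m])
    err_personal = 1 - personalized.score(
        np.column_stack([X_te[m], g_te[m]]), y_te[m]
    )
    # The paper's point: err_personal is not guaranteed to be lower than
    # err_generic for every group that provides its attribute.
    print(f"group {grp}: generic={err_generic:.3f}  personalized={err_personal:.3f}")
```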

Participatory Systems for Personalized Prediction

This work introduces a family of personalized prediction models called participatory systems that support informed consent, and presents a model-agnostic approach for supervised learning where personal data is encoded as "group" attributes.

Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey

This paper provides a comprehensive survey of bias mitigation methods for achieving fairness in Machine Learning (ML) models and investigates how existing bias mitigation methods are evaluated in the literature.

References

Showing 1-10 of 87 references

Fairness without Harm: Decoupled Classifiers with Preference Guarantees

It is argued that when this kind of treatment disparity exists, it should be in the best interest of each group, and a recursive procedure is introduced that adaptively selects group attributes for decoupling to ensure preference guarantees in terms of generalization error.
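
A minimal sketch of the decoupling idea only, not the paper's recursive selection procedure: fit a pooled model and one model per group, then check the basic preference ("rationality") condition that each group's own model does no worse than the pooled one on held-out data. The helper name and inputs are hypothetical; scikit-learn is assumed.

```python
# Sketch of decoupled classifiers with a per-group preference check.
import numpy as np
from sklearn.linear_model import LogisticRegression

def decoupled_preference_check(X_tr, g_tr, y_tr, X_te, g_te, y_te):
    pooled = LogisticRegression().fit(X_tr, y_tr)
    report = {}
    for grp in np.unique(g_tr):
        tr, te = g_tr == grp, g_te == grp
        own = LogisticRegression().fit(X_tr[tr], y_tr[tr])
        err_pooled = 1 - pooled.score(X_te[te], y_te[te])
        err_own = 1 - own.score(X_te[te], y_te[te])
        # "Rationality": decoupling should not hurt the group it targets.
        report[int(grp)] = {"pooled": err_pooled, "own": err_own,
                            "prefers_own": err_own <= err_pooled}
    return report
```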

Fairness With Minimal Harm: A Pareto-Optimal Approach For Healthcare

This work argues that even in domains where fairness must be enforced at some cost, finding a no-unnecessary-harm fairness model is the optimal initial step, and presents a methodology for training neural networks that achieve this goal by dynamically re-balancing subgroup risks.
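
As a loose illustration of dynamic subgroup-risk re-balancing (an assumption-laden sketch, not the paper's algorithm or network architecture), one can upweight the currently worst-off subgroup between training rounds; here a scikit-learn logistic regression stands in for the neural network, and all names are hypothetical.

```python
# Sketch: iteratively upweight the subgroup with the highest error.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rebalanced_training(X, y, group, rounds=10, step=0.5):
    groups = np.unique(group)
    weights = {g: 1.0 for g in groups}
    clf = None
    for _ in range(rounds):
        sample_w = np.array([weights[g] for g in group])
        clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        # Training error per subgroup (use a held-out split in practice).
        errs = {g: 1 - clf.score(X[group == g], y[group == g]) for g in groups}
        worst = max(errs, key=errs.get)
        weights[worst] *= (1 + step)   # upweight the currently worst-off subgroup
    return clf, weights
```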

Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions

This paper characterizes the perturbed distribution as a counterfactual distribution, describes its properties for common fairness criteria, and discusses how the estimated distribution can be used to build a data preprocessor that reduces disparate impact without training a new model.

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

A new notion of unfairness, disparate mistreatment, is introduced and defined in terms of misclassification rates; for decision boundary-based classifiers, these measures can be easily incorporated into the training formulation as convex-concave constraints.
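
Disparate mistreatment is defined by gaps in misclassification rates across groups, which can be measured directly; the sketch below computes false positive and false negative rate gaps for two groups. The convex-concave training constraints themselves are not reproduced here, and the function name is hypothetical.

```python
# Sketch: disparate mistreatment as FPR/FNR gaps between two groups.
import numpy as np

def mistreatment_gaps(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for grp in np.unique(group):
        m = group == grp
        fpr = np.mean(y_pred[m][y_true[m] == 0] == 1)  # false positive rate
        fnr = np.mean(y_pred[m][y_true[m] == 1] == 0)  # false negative rate
        rates[grp] = (fpr, fnr)
    (fpr0, fnr0), (fpr1, fnr1) = rates.values()
    return {"fpr_gap": abs(fpr0 - fpr1), "fnr_gap": abs(fnr0 - fnr1)}
```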

Model Cards for Model Reporting

This work proposes model cards, a framework that can be used to document any trained machine learning model in the application fields of computer vision and natural language processing, and provides cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text.
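
Purely illustrative: a model card can be captured as structured data so that it travels with the model. The field names below only approximate the sections proposed in the paper, and every example value is hypothetical.

```python
# Illustrative model card as structured data (field names approximate the
# sections proposed in the paper).
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    model_details: dict
    intended_use: str
    factors: list             # e.g. demographic groups evaluated across
    metrics: dict             # metric name -> per-group results
    evaluation_data: str
    ethical_considerations: str = ""
    caveats: str = ""

card = ModelCard(
    model_details={"name": "smile-detector", "version": "0.1"},
    intended_use="Research demo; not for surveillance or identity decisions.",
    factors=["age group", "gender presentation", "skin tone"],
    metrics={"accuracy": {"overall": 0.91}},
    evaluation_data="Held-out benchmark split (hypothetical).",
)
print(asdict(card))
```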

FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning

FAIRVIS is a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models and demonstrates how interactive visualization may help data scientists and the general public understand and create more equitable algorithms.

Certifying and Removing Disparate Impact

This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
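
A hedged sketch of the two ideas in this summary: the disparate-impact ratio of positive-outcome rates across groups (the "80% rule" comparison), and using how accurately the protected class can be predicted from the remaining attributes as a proxy certificate. Function names are hypothetical; scikit-learn is assumed.

```python
# Sketch: disparate-impact ratio plus a predictability-based check.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def disparate_impact_ratio(y_pred, group):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    p0 = np.mean(y_pred[group == 0] == 1)
    p1 = np.mean(y_pred[group == 1] == 1)
    return min(p0, p1) / max(p0, p1)      # "80% rule" compares this to 0.8

def protected_attribute_predictability(X, group):
    # Balanced accuracy of predicting the protected class from X; values
    # near 0.5 suggest the remaining features carry little group information.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, group,
                             cv=5, scoring="balanced_accuracy")
    return scores.mean()
```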

Fairness through awareness

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
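
As a small illustration of the "similar individuals, similar treatment" constraint, the sketch below checks a Lipschitz-style condition |f(x) - f(x')| <= d(x, x') on sampled pairs, using a stand-in Euclidean metric in place of the paper's hypothetical task-specific similarity metric; the function name is made up.

```python
# Sketch: fraction of sampled pairs violating |f(x) - f(x')| <= d(x, x').
import numpy as np

def lipschitz_violations(scores, X, n_pairs=1000, seed=0):
    rng = np.random.default_rng(seed)
    scores, X = np.asarray(scores, dtype=float), np.asarray(X, dtype=float)
    i = rng.integers(0, len(X), size=n_pairs)
    j = rng.integers(0, len(X), size=n_pairs)
    d = np.linalg.norm(X[i] - X[j], axis=1)   # stand-in similarity metric
    gap = np.abs(scores[i] - scores[j])
    return np.mean(gap > d + 1e-9)            # fraction of violating pairs
```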

Fair classification and social welfare

This paper presents a welfare-based analysis of fair classification regimes and shows that applying stricter fairness criteria codified as parity constraints can worsen welfare outcomes for both groups.

Personalized Multitask Learning for Predicting Tomorrow's Mood, Stress, and Health

Empirical results demonstrate that using MTL to account for individual differences provides large performance improvements over traditional machine learning methods and yields personalized, actionable insights.
...