Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy

@article{Rodolfa2021EmpiricalOO,
  title={Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy},
  author={Kit T. Rodolfa and Hemank Lamba and Rayid Ghani},
  journal={Nat. Mach. Intell.},
  year={2021},
  volume={3},
  pages={896-904}
}
The growing use of machine learning in policy and social impact settings has raised concerns about fairness implications, especially for racial minorities. These concerns have generated considerable interest among machine learning and artificial intelligence researchers, who have developed new methods and established theoretical bounds for improving fairness, focusing on the source data, regularization and model training, or post-hoc adjustments to model scores. However, little work has studied the…
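The abstract groups fairness interventions into three stages: the source data, model training, and post-hoc adjustments to model scores. As a minimal sketch of the post-hoc family only (not the paper's method; all names, arguments, and the recall target below are illustrative assumptions), one can pick a separate score threshold per protected group so that each group's recall on known positives is roughly equal:

```python
import numpy as np

def per_group_thresholds(scores, labels, groups, target_recall=0.8):
    """Choose a score threshold per group so each group's recall on
    known positives is roughly target_recall.

    Illustrative sketch only: function and argument names are
    assumptions, not any paper's API; a real deployment would need
    to handle ties, small groups, and proper validation splits.
    """
    thresholds = {}
    for g in np.unique(groups):
        # Sorted scores of this group's known positives.
        pos = np.sort(scores[(groups == g) & (labels == 1)])
        # Thresholding at the (1 - target_recall) quantile of positive
        # scores flags roughly target_recall of this group's positives.
        k = int(np.floor((1.0 - target_recall) * len(pos)))
        thresholds[g] = pos[k]
    return thresholds

# Hypothetical usage with synthetic data:
rng = np.random.default_rng(0)
scores = rng.random(1000)
labels = rng.integers(0, 2, 1000)
groups = rng.choice(["a", "b"], 1000)
print(per_group_thresholds(scores, labels, groups))
```

Because only the decision thresholds change, this kind of adjustment leaves the underlying model and training data untouched, which is what distinguishes post-hoc methods from the other two intervention stages the abstract lists.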
Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making
While artificial intelligence (AI) is increasingly applied in decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right…
Construction and Optimization of Mental Health Education Consultation Management System Based on Decision Tree Association Rule Mining
TLDR
A new formula is proposed for computing the similarity of the child nodes of a label set, used to evaluate the effect of attribute classification; it accounts for whether elements of two multi-label sets co-occur, addressing shortcomings in existing psychological counselling services.
Fairness implications of encoding protected categorical attributes
TLDR
This work compares the accuracy and fairness implications of the two most widely used encoders, one-hot encoding and target encoding, and distinguishes two types of induced bias that can arise from these encodings and lead to unfair models.
Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
TLDR
To successfully manage the risks of AI bias, practitioners must operationalize values and create new norms around how AI is built and deployed, according to experts in Trustworthy and Responsible AI.
FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes
TLDR
This work develops a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space or to improve the fairness or accuracy of a benchmark model, and enables users to combine a large number of previously trained and newly trained predictors.

References

Showing 1–10 of 39 references
dssg/peeps-chili: Release for trade-offs submission
  • 2021
Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions
TLDR
The equity outcomes sought are discussed, the corresponding choice of a metric for measuring predictive fairness in this context is described, and a set of options for balancing equity and efficiency when building and selecting machine learning models in an operational public policy setting is explored.
Mitigating bias in algorithmic hiring: evaluating claims and practices
TLDR
This work identifies vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), documents what they have disclosed about their development and validation procedures, and evaluates their practices, focusing particularly on efforts to detect and mitigate bias.
Predictive Analytics for Retention in Care in an Urban HIV Clinic
TLDR
A machine learning model is developed to identify patients at risk of dropping out of care at an urban HIV clinic, using electronic medical records and geospatial data; it outperforms the previous state-of-the-art logistic regression model.
A comparative study of fairness-enhancing interventions in machine learning
TLDR
It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
America's Promise Alliance
  • 2019
An Empirical Study of Rich Subgroup Fairness for Machine Learning
TLDR
It is found that, in general, the Kearns et al. algorithm converges quickly, that large gains in fairness can be obtained at mild cost to accuracy, and that optimizing accuracy subject only to marginal fairness constraints leads to classifiers with substantial subgroup unfairness.
Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees
TLDR
A meta-algorithm for classification is proposed that takes as input a general class of fairness constraints with respect to multiple non-disjoint, multi-valued sensitive attributes, and that comes with provable guarantees.
Dissecting racial bias in an algorithm used to manage the health of populations
TLDR
It is suggested that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
Racial Equity in Algorithmic Criminal Justice
Algorithmic tools for predicting violence and criminality are being used more and more in policing, bail, and sentencing. Scholarly attention to date has focused on their procedural due process…