Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy

@article{Rodolfa2021EmpiricalOO,
  title={Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy},
  author={Kit T. Rodolfa and Hemank Lamba and Rayid Ghani},
  journal={Nat. Mach. Intell.},
  year={2021},
  volume={3},
  pages={896--904}
}
Growing use of machine learning in policy and social impact settings has raised concerns about fairness implications, especially for racial minorities. These concerns have generated considerable interest among machine learning and artificial intelligence researchers, who have developed new methods and established theoretical bounds for improving fairness, focusing on the source data, regularization and model training, or post-hoc adjustments to model scores. However, little work has studied the…
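To make the abstract's mention of post-hoc adjustments to model scores concrete, here is a minimal sketch of one such adjustment, per-group thresholding to equalize recall. This is an illustration under assumptions, not the paper's exact method; the score, label, and group arrays are hypothetical.

```python
import numpy as np

def recall_equalizing_thresholds(scores, labels, groups, target_recall=0.6):
    """Pick a per-group score threshold so each group's recall is
    approximately target_recall. The model's scores are untouched;
    only the decision cutoff varies by group (a post-hoc adjustment)."""
    thresholds = {}
    for g in np.unique(groups):
        pos = np.sort(scores[(groups == g) & (labels == 1)])
        if len(pos) == 0:
            thresholds[g] = np.inf
            continue
        # Keep the top target_recall fraction of this group's positives.
        cut = int(np.floor((1 - target_recall) * len(pos)))
        thresholds[g] = pos[cut]
    return thresholds

# Hypothetical data for illustration only.
rng = np.random.default_rng(0)
scores = rng.random(1000)
groups = rng.choice(["A", "B"], size=1000)
labels = (scores + rng.normal(0, 0.3, size=1000) > 0.8).astype(int)

for g, t in sorted(recall_equalizing_thresholds(scores, labels, groups).items()):
    kept = scores[(groups == g) & (labels == 1)] >= t
    print(f"group {g}: threshold {t:.3f}, recall {kept.mean():.2f}")
```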
Model Multiplicity: Opportunities, Concerns, and Solutions
TLDR
This work investigates how to take advantage of the flexibility afforded by model multiplicity while addressing the concerns about justifiability that it might raise, and demonstrates that there are many different ways of making equally accurate predictions.
It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy
TLDR
It is found that black-box models may be as explainable to a human in the loop (HITL) as interpretable models, and two possible reasons are identified, including that more information about a model may confuse users, leading them to perform worse on objectively measurable explainability tasks.
Survey on Fair Reinforcement Learning: Theory and Practice
TLDR
An algorithm based on EXP3 [10] is proposed that attains sub-linear bounds on both cumulative regret and fairness regret, provided the adversary is restricted to assigning only certain kinds of evaluations.
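For readers unfamiliar with EXP3, the adversarial-bandit algorithm this survey entry builds on, a bare-bones sketch follows. The reward function and constants are hypothetical placeholders; this shows the standard exponential-weights update, not the survey's fairness-aware variant.

```python
import numpy as np

def exp3(n_arms, reward_fn, horizon=2000, gamma=0.1):
    """EXP3: exponential weights over arms, mixed with uniform exploration,
    updated with importance-weighted reward estimates (rewards in [0, 1])."""
    rng = np.random.default_rng(0)
    weights = np.ones(n_arms)
    total = 0.0
    for t in range(horizon):
        probs = (1 - gamma) * weights / weights.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=probs)
        r = reward_fn(arm, t)                      # chosen by the adversary
        total += r
        # Unbiased estimate r / p for the pulled arm; other arms estimate 0.
        weights[arm] *= np.exp(gamma * r / (probs[arm] * n_arms))
    return total / horizon

# Hypothetical adversary: arm 1 pays off slightly more often.
rng = np.random.default_rng(42)
payoff = lambda arm, t: float(rng.random() < 0.4 + 0.2 * (arm == 1))
print(f"average reward: {exp3(3, payoff):.3f}")
```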
Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making
While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right
Construction and Optimization of Mental Health Education Consultation Management System Based on Decision Tree Association Rule Mining
TLDR
A new formula for the similarity of the child nodes of a label set is proposed to evaluate the effect of attribute classification; it comprehensively considers whether the elements of two multilabel sets appear, or fail to appear, at the same time, which can mitigate problems in existing psychological counselling services.
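A similarity that credits labels present in both sets as well as labels absent from both resembles a simple matching coefficient over a fixed label universe. The sketch below is a hedged illustration of that idea, not the paper's formula; the counselling-topic labels are invented.

```python
def simple_matching(a, b, universe):
    """Fraction of the label universe on which two multilabel sets agree,
    counting both joint presence and joint absence (unlike Jaccard,
    which ignores labels absent from both sets)."""
    a, b = set(a), set(b)
    agreements = sum((x in a) == (x in b) for x in universe)
    return agreements / len(universe)

# Hypothetical counselling-topic labels.
universe = {"anxiety", "sleep", "stress", "family", "study"}
print(simple_matching({"anxiety", "sleep"}, {"anxiety", "stress"}, universe))  # 0.6
```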
Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
TLDR
To successfully manage the risks of AI bias, practitioners must operationalize values and create new norms around how AI is built and deployed, according to experts in the area of Trustworthy and Responsible AI.
Fairness implications of encoding protected categorical attributes
TLDR
This work compares the accuracy and fairness implications of the two most well-known encoders, one-hot encoding and target encoding, and distinguishes two types of induced bias that can arise when using these encodings and can lead to unfair models.
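To make the two encoders concrete, here is a minimal pandas sketch with an invented categorical attribute and label. Note how target encoding injects each category's observed base rate directly into the feature, one route by which encoding choices can induce bias.

```python
import pandas as pd

# Hypothetical data: a protected categorical attribute and a binary label.
df = pd.DataFrame({
    "group": ["a", "b", "a", "c", "b", "a"],
    "label": [1,   0,   1,   0,   0,   1],
})

# One-hot encoding: one indicator column per category.
one_hot = pd.get_dummies(df["group"], prefix="group")

# Target encoding: each category is replaced by its mean observed label,
# so group base rates flow straight into the feature.
target = df["group"].map(df.groupby("group")["label"].mean()).rename("group_target")

print(pd.concat([df, one_hot, target], axis=1))
```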
FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes
TLDR
A flexible framework for fair ensemble learning is presented that allows users to efficiently explore the fairness-accuracy space or to improve the fairness or accuracy of a benchmark model; it enables users to combine a large number of previously trained and newly trained predictors.
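One inexpensive way to explore a fairness-accuracy space with already-trained predictors, in the spirit of (but not identical to) FADE, is to sweep convex combinations of their scores. Everything below, including the two toy predictors, is a hypothetical illustration.

```python
import numpy as np

def blend_frontier(scores_a, scores_b, labels, groups, steps=6):
    """Trace (accuracy, selection-rate gap) for convex blends of two
    pre-trained predictors' scores, with no retraining required."""
    for w in np.linspace(0.0, 1.0, steps):
        yhat = (w * scores_a + (1 - w) * scores_b >= 0.5).astype(int)
        acc = (yhat == labels).mean()
        rates = [yhat[groups == g].mean() for g in np.unique(groups)]
        print(f"w={w:.1f}  accuracy={acc:.3f}  rate gap={max(rates)-min(rates):.3f}")

# Hypothetical predictors: one accurate but group-skewed, one flat.
rng = np.random.default_rng(3)
groups = rng.choice(["A", "B"], 2000)
labels = rng.integers(0, 2, 2000)
scores_a = 0.7 * labels + 0.2 * (groups == "A") + 0.1 * rng.random(2000)
scores_b = np.full(2000, 0.5) + 0.05 * rng.random(2000)
blend_frontier(scores_a, scores_b, labels, groups)
```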

References

Showing 1-10 of 39 references
Predictive Analytics for Retention in Care in an Urban HIV Clinic
TLDR
A machine learning model is developed to identify patients at risk of dropping out of care in an urban HIV clinic using electronic medical records and geospatial data; it performs better than the previous state-of-the-art logistic regression model.
Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions
TLDR
The equity outcomes sought are discussed, the corresponding choice of a metric for measuring predictive fairness in this context is described, and a set of options for balancing equity and efficiency when building and selecting machine learning models in an operational public policy setting is explored.
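A minimal sketch of the kind of predictive-fairness measurement this case study discusses, recall compared across protected groups, follows; the inputs are hypothetical and the ratio-to-best-group convention is an assumption for illustration.

```python
import numpy as np

def recall_equity(y_true, y_pred, groups):
    """Recall per group plus each group's ratio to the best-off group;
    ratios near 1.0 indicate recall parity."""
    recalls = {}
    for g in np.unique(groups):
        pos = (groups == g) & (y_true == 1)
        recalls[g] = y_pred[pos].mean() if pos.any() else float("nan")
    best = max(recalls.values())
    return {g: (r, r / best) for g, r in recalls.items()}

# Hypothetical predictions and a two-group protected attribute.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
for g, (r, ratio) in recall_equity(y_true, y_pred, groups).items():
    print(f"group {g}: recall {r:.2f}, ratio to best {ratio:.2f}")
```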
Dissecting racial bias in an algorithm used to manage the health of populations
TLDR
It is suggested that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
Using machine learning to help vulnerable tenants in New York city
TLDR
A machine learning model can potentially help the Tenant Support Unit (TSU) find 59% more buildings where tenants face landlord harassment than the current outreach method does with the same resources, and it highlights the factors that help predict the risk of tenant harassment.
Mitigating bias in algorithmic hiring: evaluating claims and practices
TLDR
This work identifies vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), documents what they have disclosed about their development and validation procedures, and evaluates their practices, focusing particularly on efforts to detect and mitigate bias.
People with Complex Needs and the Criminal Justice System
Efforts to enhance efficiency in service provision have produced increasingly sophisticated targeting in the various human service domains. In the context of changing demographics, the…
Mental Health Problems of Prison and Jail Inmates
At midyear 2005 more than half of all prison and jail inmates had a mental health problem, including 705,600 inmates in State prisons, 70,200 in Federal prisons, and 479,900 in local jails. These…
Racial Equity in Algorithmic Criminal Justice
Algorithmic tools for predicting violence and criminality are increasingly used in policing, bail, and sentencing. Scholarly attention to date has focused on their procedural due process…
An Empirical Study of Rich Subgroup Fairness for Machine Learning
TLDR
It is found that, in general, the Kearns et al. algorithm converges quickly, that large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness.
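The marginal-versus-subgroup gap is easy to demonstrate: a classifier can have equal false-positive rates on every marginal group while treating a conjunction of attributes very differently. The audit below is a hypothetical construction (synthetic attributes, an XOR-patterned classifier), not the Kearns et al. algorithm itself.

```python
import numpy as np
from itertools import product

def fp_rate(y, yhat, mask):
    """False-positive rate among true negatives inside the masked group."""
    neg = mask & (y == 0)
    return yhat[neg].mean() if neg.any() else float("nan")

rng = np.random.default_rng(1)
n = 40_000
race = rng.integers(0, 2, n)
sex = rng.integers(0, 2, n)
y = np.zeros(n, dtype=int)                    # everyone is a true negative
# Classifier that over-flags exactly where race == sex (an XOR pattern):
yhat = ((race == sex) & (rng.random(n) < 0.4)).astype(int)

# Marginal groups all show ~0.2, yet two subgroups sit at ~0.4 and two at 0.
for name, mask in [("race=0", race == 0), ("race=1", race == 1),
                   ("sex=0", sex == 0), ("sex=1", sex == 1)]:
    print(f"marginal {name}: FP rate {fp_rate(y, yhat, mask):.3f}")
for r, s in product([0, 1], repeat=2):
    mask = (race == r) & (sex == s)
    print(f"subgroup race={r}, sex={s}: FP rate {fp_rate(y, yhat, mask):.3f}")
```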
Risk, Race, and Recidivism: Predictive Bias and Disparate Impact
One way to unwind mass incarceration without compromising public safety is to use risk assessment instruments in sentencing and corrections. Although these instruments figure prominently in current…
...