Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions

@inproceedings{Rodolfa2020CaseSP,
  title={Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions},
  author={Kit T. Rodolfa and Erika Salomon and Lauren Haynes and Iv{\'a}n Higuera Mendieta and Jamie L Larson and Rayid Ghani},
  booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},
  year={2020}
}
The criminal justice system is currently ill-equipped to improve outcomes of individuals who cycle in and out of the system with a series of misdemeanor offenses. Often, due to caseload constraints and poor record linkage, prior interactions may not be considered when an individual comes back into the system, let alone acted on proactively through the application of diversion programs. The Los Angeles City Attorney's Office recently created a new Recidivism Reduction and…

Citations

An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings

The study finds a wide degree of variability and inconsistency in the ability of many of these methods to improve model fairness, but postprocessing by choosing group-specific score thresholds consistently removes disparities, with important implications for both the ML research community and practitioners deploying machine learning to inform consequential policy decisions.
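For context on that post-processing approach: choosing a separate score threshold per group so that a chosen metric is equalized is conceptually simple. Below is a minimal sketch, assuming NumPy arrays of model scores, group labels, and observed outcomes, and assuming recall is the metric being equalized; the function names are illustrative, not the paper's code.

import numpy as np

def group_thresholds(scores, groups, y_true, target_recall):
    # For each group, pick the score cutoff that captures roughly
    # target_recall of that group's observed positives. Equalizing
    # recall is one choice of disparity metric among several.
    thresholds = {}
    for g in np.unique(groups):
        pos = np.sort(scores[(groups == g) & (y_true == 1)])[::-1]
        if len(pos) == 0:
            continue  # no observed positives in this group
        k = max(1, int(np.ceil(target_recall * len(pos))))
        thresholds[g] = pos[k - 1]  # lowest score still selected
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    # Flag anyone scoring at or above their own group's threshold.
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

In a real deployment the thresholds would be fit on a validation set, and the metric to equalize would follow from the policy context (e.g., recall among people who would benefit from an intervention).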

Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy

In every setting, explicitly focusing on achieving equity and applying the proposed post-hoc disparity mitigation methods substantially improved fairness without sacrificing accuracy, challenging a commonly held assumption that reducing disparities requires either accepting an appreciable drop in accuracy or developing novel, complex methods.

Fairness and bias correction in machine learning for depression prediction: results from four different study populations

A significant level of stigma and inequality exists in mental healthcare, especially for under-served populations, and it propagates into collected data. When not properly accounted for, machine…

Social impacts of algorithmic decision-making: A research agenda for the social sciences

Academic and public debates are increasingly concerned with the question of whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic…

The Forgotten Margins of AI Ethics

How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries? We trace both the historical roots and current…

Fairness and Sequential Decision Making: Limits, Lessons, and Opportunities

This paper compares and discusses work across two major subsets of this literature: algorithmic fairness, which focuses primarily on predictive systems, and ethical decision making, which focuses primarily on sequential decision making and planning.

Criteria for algorithmic fairness metric selection under different supervised classification scenarios

A clustering of metrics enables fairness metric selection and fosters general recommendations on the matter, which should help nourish an ongoing and context-specific discussion on algorithmic fairness, within and outside of the research community.

A Conceptual Framework for Using Machine Learning to Support Child Welfare Decisions

Human services systems make key decisions that impact individuals in society. The U.S. child welfare system makes such decisions, from screening-in hotline reports of suspected abuse or neglect…

Fairness in Contextual Resource Allocation Systems: Metrics and Incompatibility Results

The authors propose a framework for evaluating fairness in contextual resource allocation systems, inspired by fairness metrics in machine learning, which can be applied both to evaluate the fairness properties of a historical policy and to impose constraints in the design of new (counterfactual) allocation policies.

Machine learning for public policy: Do we need to sacrifice accuracy to make models fair?

In each setting, explicitly focusing on achieving equity and applying the proposed post-hoc disparity mitigation methods substantially improved fairness without sacrificing accuracy, challenging the commonly held assumption that reducing disparities requires either accepting an appreciable drop in accuracy or developing novel, complex methods.

References

Showing 1-10 of 65 references

Risk, Race, & Recidivism: Predictive Bias and Disparate Impact

One way to unwind mass incarceration without compromising public safety is to use risk assessment instruments in sentencing and corrections. Although these instruments figure prominently in current…

Case management and recidivism of mentally ill persons released from jail.

The findings offer hope that expanding access to case management, both inside and outside jail, will help mentally ill people live in their communities and stay out of jail.

A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions

This paper describes work on developing, validating, fairness auditing, and deploying a risk prediction model in Allegheny County, PA, USA, discusses the results, and highlights key problems and data bias issues that present challenges for model evaluation and deployment.

Risk as a Proxy for Race

Today, an increasing chorus argues that risk-assessment instruments are a politically feasible way to resolve our problem of mass incarceration and reduce prison populations. In this essay, I argue…

Pretrial Court Diversion of People with Mental Illness

In summary, research has not yet yielded generalizable knowledge about diversion; it is thus suggested that evaluations should involve well-defined indicators, benchmarks, and outcomes.

People with Complex Needs and the Criminal Justice System

Efforts to enhance efficiency in service provision have produced increasingly sophisticated targeting in the various human service domains. In the context of changing demographics, the…

Racial Equity in Algorithmic Criminal Justice

Algorithmic tools for predicting violence and criminality are being used more and more in policing, bail, and sentencing. Scholarly attention to date has focused on their procedural due process…

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

It is demonstrated that these fairness criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and shown how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
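The incompatibility can be made precise. Reconstructed here from the standard confusion-matrix definitions (with $p$ a group's recidivism prevalence, $\mathrm{PPV}$ the positive predictive value, and $\mathrm{FPR}$, $\mathrm{FNR}$ the error rates), the quantities within each group satisfy

\[
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr),
\]

so if two groups differ in prevalence $p$ but share the same $\mathrm{PPV}$, their error rates $\mathrm{FPR}$ and $\mathrm{FNR}$ cannot both be equal across groups, and vice versa.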

Bias In, Bias Out

Police, prosecutors, judges, and other criminal justice actors increasingly use algorithmic risk assessment to estimate the likelihood that a person will commit future crime. As many scholars have…
...