Corpus ID: 231846960

The Limits of Computation in Solving Equity Trade-Offs in Machine Learning and Justice System Risk Assessment

  • J. Russell
  • Published 2021
  • Computer Science, Mathematics
  • ArXiv
This paper explores how different ideas of racial equity in machine learning, in justice settings in particular, can present trade-offs that are difficult to solve computationally. Machine learning is often used in justice settings to create risk assessments, which are used to determine interventions, resources, and punitive actions. Overall aspects and performance of these machine learning-based tools, such as distributions of scores, outcome rates by levels, and the frequency of false…
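The trade-offs the abstract refers to can be illustrated with a small, hypothetical sketch (not from the paper; the groups, scores, and outcomes below are invented for illustration): when two groups have different base rates, applying one score threshold to everyone yields different selection rates and false positive rates across groups, so equalizing one metric generally requires unequal treatment on another.

```python
# Toy illustration (assumed data, not from the paper): two groups scored
# on the same scale, classified with the same threshold.

def rates(y_true, y_pred):
    """Return (selection rate, false positive rate) for one group."""
    n = len(y_true)
    sel = sum(y_pred) / n
    # False positive rate: predicted positives among true negatives.
    neg_preds = [p for t, p in zip(y_true, y_pred) if t == 0]
    fpr = sum(neg_preds) / len(neg_preds)
    return sel, fpr

# Each entry is (risk score, observed outcome). Group A has a higher
# base rate of the outcome than group B (invented values).
group_a = [(0.8, 1), (0.8, 1), (0.8, 0), (0.2, 0)]
group_b = [(0.8, 1), (0.2, 0), (0.2, 0), (0.2, 0)]

# One shared threshold: "treat similarly risky people similarly."
threshold = 0.5
pred_a = [1 if s >= threshold else 0 for s, _ in group_a]
pred_b = [1 if s >= threshold else 0 for s, _ in group_b]

sel_a, fpr_a = rates([y for _, y in group_a], pred_a)
sel_b, fpr_b = rates([y for _, y in group_b], pred_b)

# Despite identical treatment of identical scores, the groups end up
# with different selection rates (0.75 vs 0.25) and different false
# positive rates (0.5 vs 0.0). Equalizing either metric would require
# group-specific thresholds, i.e. treating equal scores unequally.
print(sel_a, fpr_a, sel_b, fpr_b)
```

This is the shape of the impossibility results the listed papers discuss: with unequal base rates, common fairness criteria cannot all hold at once, so the choice among them is a normative question rather than a computational one.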

The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment
It is argued that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy, but rather, one of purpose, and an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation is proposed.
Fairness in Criminal Justice Risk Assessments: The State of the Art
Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this article, we seek to clarify…
Algorithmic Decision Making and the Cost of Fairness
This work reformulates algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities; the analysis applies both to algorithms and to human decision makers carrying out structured decision rules.
The Public Safety Assessment: A Re-Validation and Assessment of Predictive Utility and Differential Prediction by Race and Gender in Kentucky
In this paper, we assess the predictive validity and differential prediction by race and gender of one pretrial risk assessment, the Public Safety Assessment (PSA). The PSA was developed with support…
On Fairness and Calibration
It is shown that calibration is compatible only with a single error constraint, and that any algorithm that satisfies this relaxation is no better than randomizing a percentage of predictions for an existing classifier.
Big Data's Disparate Impact
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with…
A large-scale analysis of racial disparities in police stops across the United States
It is found that black drivers were less likely to be stopped after sunset, when a ‘veil of darkness’ masks one’s race, suggesting bias in stop decisions, and evidence that the bar for searching black and Hispanic drivers was lower than that for searching white drivers.
A critical examination of "being Black" in the juvenile justice system.
The findings show that Black youth received disadvantaged court outcomes at 2 of the 3 stages, even after balancing both groups on a number of confounders, and highlight the importance of utilizing a more stringent statistical model to control for selection bias.
Juvenile Incarceration, Human Capital and Future Crime: Evidence from Randomly-Assigned Judges
Over 130,000 juveniles are detained in the US each year, with 70,000 in detention on any given day, yet little is known about whether such a penalty deters future crime or interrupts social and human…