Corpus ID: 235953913

Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing

Benjamin Davies and Thomas Douglas
It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool… 
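The abstract's proposed remedy, withholding race from the tool, is exactly what the perfect proxy problem undermines. A minimal Python sketch (hypothetical data and scoring rule, our own illustration rather than the authors') shows how a feature that perfectly proxies the withheld attribute reproduces the disparity:

```python
# Toy illustration of the "perfect proxy" problem (hypothetical data and
# scoring rule, not taken from the paper): the protected attribute is
# withheld, but a postcode perfectly encodes group membership.

records = [
    # (group, postcode, prior_arrests)
    ("A", 1, 3), ("A", 1, 4), ("A", 1, 2),
    ("B", 2, 3), ("B", 2, 4), ("B", 2, 2),
]

# "Fairness through unawareness": the model only ever sees these features.
features = [(postcode, priors) for (_, postcode, priors) in records]

def risk_score(postcode, priors):
    # A scorer that has, in effect, learned the proxy from biased data:
    # it never sees the group label, only the postcode.
    return priors + (1 if postcode == 1 else 0)

# Scores nonetheless differ systematically by group, because the postcode
# reconstructs group membership exactly.
scores_by_group = {}
for group, postcode, priors in records:
    scores_by_group.setdefault(group, []).append(risk_score(postcode, priors))

avg = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
print(avg)  # group A averages one point higher despite identical priors
```

Dropping the postcode as well would remove the gap in this toy case; the difficulty the title points to is that a perfect proxy may not be removable without also discarding genuinely predictive information.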
1 Citation
AI risk assessment: new opportunities and risks for statistical predictions of recidivism
1. Introduction New risk assessment instruments that apply artificial intelligence (AI) to predict recidivism risk are being developed, particularly in the United States. AI risk assessment is …


References

Machine Learning Forecasts of Risk to Inform Sentencing Decisions
There is now a substantial and compelling literature in statistics and computer science showing that machine learning statistical procedures will forecast at least as well as, and typically more accurately than, older approaches commonly derived from various forms of regression analysis.
Evidence-Based Sentencing and the Scientific Rationalization of Discrimination
This paper critiques, on legal and empirical grounds, the growing trend of basing criminal sentences on actuarial recidivism risk prediction instruments that include demographic and socioeconomic variables.
The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
It is argued that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce, rather than requiring that algorithms satisfy popular mathematical formalizations of fairness.
Racial Equity in Algorithmic Criminal Justice
Algorithmic tools for predicting violence and criminality are being used more and more in policing, bail, and sentencing. Scholarly attention to date has focused on their procedural due process …
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
It is demonstrated that common fairness criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups, and shown how disparate impact can arise when a recidivism prediction instrument (RPI) fails to satisfy the criterion of error rate balance.
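The incompatibility this snippet states can be checked numerically using Chouldechova's identity relating false positive rate (FPR), prevalence p, positive predictive value (PPV), and false negative rate (FNR). The sketch below (our own arithmetic with hypothetical numbers) shows that two groups with equal PPV and FNR but different prevalence must have different FPRs:

```python
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR).
# If PPV and FNR are equal across groups but prevalence p differs,
# the identity forces the false positive rates to differ.

def implied_fpr(prevalence, ppv, fnr):
    # False positive rate implied by the group's prevalence, PPV, and FNR.
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical numbers: same PPV and FNR, different base rates.
fpr_a = implied_fpr(prevalence=0.5, ppv=0.7, fnr=0.3)
fpr_b = implied_fpr(prevalence=0.3, ppv=0.7, fnr=0.3)
print(fpr_a, fpr_b)  # the two false positive rates cannot be equal
```

Error rate balance would require `fpr_a == fpr_b`, so with unequal prevalence at least one of the criteria (PPV parity, FNR parity, FPR parity) must fail.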
Big Data's Disparate Impact
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with.
Accountable Algorithms
Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police …
Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures
This chapter discusses the implicit modeling assumptions made by most data mining algorithms, shows situations in which they are not satisfied, and outlines three realistic scenarios in which an unbiased process can lead to discriminatory models.
On the Ground of Her Sex(uality)
Discrimination is a virtue. If you keep befriending vain people, or falling in love with bullies, the explanation may be that you are insufficiently discriminating. In your thoughts, feelings and …
What's Wrong with Machine Bias
This work motivates this puzzle and attempts to explain how to account for the wrong such judgments engender without also indicting morally permissible statistical inferences about persons.