Human Decisions and Machine Predictions

@article{Kleinberg2018HumanDA,
  title={Human Decisions and Machine Predictions},
  author={Jon M. Kleinberg and Himabindu Lakkaraju and Jure Leskovec and Jens Ludwig and Sendhil Mullainathan},
  journal={The Quarterly Journal of Economics},
  year={2018}
}
Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only… 
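
The prediction problem at the core of the paper is ordinary supervised learning: from features observed at the bail hearing, predict misconduct (failure to appear or rearrest) among released defendants. A minimal sketch of that training step is below, using scikit-learn's gradient boosting and hypothetical file and column names rather than the authors' data pipeline.

```python
# Minimal sketch of the core prediction task: gradient-boosted trees trained only on
# released defendants (the only cases with observed outcomes), predicting failure to
# appear. The file name and column names are hypothetical, not the authors' data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

cases = pd.read_csv("bail_cases.csv")             # hypothetical extract of case records
released = cases[cases["released"] == 1]          # labels exist only where judges released

X = released[["age", "prior_arrests", "prior_fta", "charge_severity"]]
y = released["failed_to_appear"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)

print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Training only on released defendants is also what makes the comparison to judges hard, as the abstract notes: outcomes for jailed defendants are never observed, so naive accuracy comparisons between the model and the judges are not apples to apples.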

Human Decision Making with Machine Assistance

This study explores how receiving machine advice influences people's bail decisions. It runs a vignette experiment in which laypersons are asked to predict whether defendants will recidivate before trial, varying whether they have access to machine advice.

Human Decision Making with Machine Assistance: An Experiment on Bailing and Jailing

Much of political debate focuses on the concern that machines might take over. Yet in many domains it is much more plausible that the ultimate choice and responsibility remain with a human.

The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables

This work develops an approach called contraction, which allows the performance of predictive models and human decision-makers to be compared without resorting to counterfactual inference, and demonstrates the utility of the resulting evaluation metric by comparing human decisions and machine predictions.
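
Contraction, roughly, exploits quasi-random assignment of cases to judges who differ in leniency: take the caseload of the most lenient judge, whose released defendants are almost all labeled, and let the model shrink that released pool to a stricter release rate by jailing the defendants it scores as riskiest; the failure rate of the remainder can then be compared with judges who release at that stricter rate. The sketch below is a simplified rendering of that idea with hypothetical column names, not the paper's exact estimator.

```python
import pandas as pd

def contraction_failure_rate(lenient_cases: pd.DataFrame, target_release_rate: float) -> float:
    """Simplified contraction estimate on the caseload of the most lenient judge.
    Hypothetical columns: 'released' (judge's decision), 'failed' (observed only if
    released), 'risk_score' (model's predicted risk)."""
    released = lenient_cases[lenient_cases["released"] == 1]
    n_total = len(lenient_cases)
    n_keep = int(target_release_rate * n_total)
    assert n_keep <= len(released), "target rate must be below the lenient judge's release rate"
    # The model jails the riskiest released defendants until only n_keep remain released.
    still_released = released.nsmallest(n_keep, "risk_score")
    # Failures per case in the full caseload (jailed defendants mechanically cannot fail).
    return still_released["failed"].sum() / n_total

# Compare with the observed failure rate of judges who release ~70% of their caseload:
# contraction_failure_rate(lenient_judge_df, target_release_rate=0.70)
```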

Simple Rules for Complex Decisions

A new method, select-regress-and-round, for constructing simple rules that perform well for complex decisions; the resulting rules significantly outperform judges and are on par with decisions derived from random forests trained on all available features.
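
As described by its authors, select-regress-and-round has three steps: select a handful of features (for example with the lasso), fit a logistic regression on them, then rescale and round the coefficients to small integers so the rule can be applied by hand. A minimal sketch of those three steps on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                    # synthetic standardized features
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(size=5000) > 0).astype(int)

# 1. Select: L1-penalized logistic regression keeps only a few features.
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
chosen = np.flatnonzero(selector.coef_[0])

# 2. Regress: ordinary logistic regression on the selected features.
coefs = LogisticRegression().fit(X[:, chosen], y).coef_[0]

# 3. Round: rescale so the largest coefficient maps to +/-3, then round to integers.
weights = np.round(3 * coefs / np.abs(coefs).max()).astype(int)

print({f"x{j}": int(w) for j, w in zip(chosen, weights)})
# The simple rule: score = sum of integer weights times the (standardized) features.
```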

Algorithms As Prosecutors: Lowering Rearrest Rates Without Disparate Impacts and Identifying Defendant Characteristics ‘Noisy’ to Human Decision-Makers

We investigate how machine learning might bring clarity to human decisions made during the criminal justice process. We created a model that predicts a defendant's risk of being rearrested after…

On the Fairness of Machine-Assisted Human Decisions

It is shown in a formal model that the inclusion of a biased human decision-maker can reverse common relationships between the structure of the algorithm and the qualities of the resulting decisions, and that excluding information about protected groups from the prediction may fail to reduce, and may even increase, ultimate disparities.

Machine Predictions and Human Decisions with Variation in Payoffs and Skills

This work proposes a framework that incorporates machine learning on large-scale administrative data into a choice model featuring heterogeneity in decision-maker payoff functions and predictive skill, and applies this framework to the major health-policy problem of improving the efficiency of antibiotic prescribing in primary care.

Algorithmic Recommendations and Human Discretion

New quasi-experimental tools are developed to measure the impact of human discretion over an algorithm, even when the outcome of interest is only selectively observed, in the context of bail decisions; the results show that high-performing judges are more likely to use relevant private information and less likely to overreact to highly salient events than low-performing judges.

Experimental Evaluation of Computer-Assisted Human Decision-Making: Application to Pretrial Risk Assessment Instrument

This work develops a statistical methodology for experimentally evaluating the causal impacts of machine recommendations on human decisions and applies the proposed methodology to the randomized evaluation of a pretrial risk assessment instrument (PRAI) in the criminal justice system.
...

References

The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables

This work develops an approach called contraction, which allows the performance of predictive models and human decision-makers to be compared without resorting to counterfactual inference, and demonstrates the utility of the resulting evaluation metric by comparing human decisions and machine predictions.

Simple Rules for Complex Decisions

A new method, select-regress-and-round, for constructing simple rules that perform well for complex decisions; the resulting rules significantly outperform judges and are on par with decisions derived from random forests trained on all available features.

Using Regression Kernels to Forecast A Failure to Appear in Court

An overview of kernel methods in regression settings is given, and a kernel method regularized with principal components is compared with stepwise logistic regression; both are applied to a timely and important criminal justice concern: failure to appear at court proceedings following an arraignment.
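
The comparison described here can be loosely illustrated as follows, with synthetic data standing in for court records and plain logistic regression standing in for the stepwise variant: a kernel regression preceded by a principal-components step versus the logistic baseline, both scored by AUC on a binary failure-to-appear stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 15))                    # synthetic stand-ins for case features
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(size=2000) > 1).astype(int)

# Kernel regression after a principal-components step, versus a plain logistic baseline.
kernel_model = make_pipeline(PCA(n_components=5), KernelRidge(kernel="rbf", alpha=1.0))
logit_model = LogisticRegression(max_iter=1000)

kernel_scores = cross_val_predict(kernel_model, X, y, cv=5)
logit_scores = cross_val_predict(logit_model, X, y, cv=5, method="predict_proba")[:, 1]

print("kernel AUC:  ", roc_auc_score(y, kernel_scores))
print("logistic AUC:", roc_auc_score(y, logit_scores))
```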

An impact assessment of machine learning risk forecasts on parole board decisions and recidivism

Objectives: The Pennsylvania Board of Probation and Parole has begun using machine learning forecasts to help inform parole release decisions. In this paper, we evaluate the impact of these forecasts on parole release decisions and recidivism.

NOISE: How to overcome the high, hidden cost of inconsistent decision making

Although algorithms may seem daunting to construct, the authors describe how to build them with input data on a small number of cases and some simple commonsense rules.
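
The kind of rule the article advocates can be illustrated as an equal-weights score: pick a few sensible predictors, put them on a common scale, orient each so that higher means riskier, and add them up. The sketch below is that toy construction with hypothetical predictor names, not the article's own worked example.

```python
import pandas as pd

def equal_weights_score(df: pd.DataFrame, predictors: list[str],
                        direction: dict[str, int]) -> pd.Series:
    """Toy 'reasoned rule': standardize each chosen predictor, orient it so that
    higher always means riskier (+1 or -1), and average with equal weights."""
    z = (df[predictors] - df[predictors].mean()) / df[predictors].std()
    oriented = z * pd.Series(direction)
    return oriented.mean(axis=1)

# Hypothetical usage on a small case table:
cases = pd.DataFrame({"prior_arrests": [0, 2, 5, 1], "age": [42, 23, 19, 35]})
print(equal_weights_score(cases, ["prior_arrests", "age"],
                          {"prior_arrests": 1, "age": -1}))
```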

Learning Cost-Effective and Interpretable Treatment Regimes

This work proposes a novel objective for constructing a decision list that maximizes outcomes for the population while minimizing overall costs, and employs a variant of the Upper Confidence Bound for Trees strategy with customized checks that prune the search space effectively.

Forecasts of Violence to Inform Sentencing Decisions

A version of classification trees available in R, with technical enhancements to improve tree stability, is applied to provide a prototype procedure for making forecasts of future dangerousness that could be used to inform sentencing decisions when machine learning is not practical.

Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err

It is shown that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster; this phenomenon, called algorithm aversion, is costly, and it is important to understand its causes.

Machine Learning: An Applied Econometric Approach

This work presents a way of thinking about machine learning that gives it its own place in the econometric toolbox, and aims to make these methods conceptually easier to use by providing a crisper understanding of how the algorithms work, where they excel, and where they can stumble.

Interpretable classification models for recidivism prediction

A recent method called supersparse linear integer models is used to produce accurate, transparent, and interpretable scoring systems along the full ROC curve, which can be used for decision making in many different use cases.
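
Supersparse linear integer models learn integer point values and a threshold by solving a mixed-integer program; as a loose illustration of what the resulting scoring system looks like in use, with invented point values and hypothetical condition names:

```python
# Illustration only: a SLIM-style scoring sheet in use. The point values below are
# invented for the example; SLIM itself learns the integer points and the threshold
# from training data via mixed-integer programming.
SCORECARD = {
    "prior_arrests>=3": 2,
    "age<25":           1,
    "prior_fta":        2,
    "employed":        -1,
}
THRESHOLD = 2  # predict 'high risk' when the total score reaches the threshold

def score(case: dict) -> tuple[int, bool]:
    """Sum the integer points for the conditions that hold for this case."""
    total = sum(points for condition, points in SCORECARD.items() if case.get(condition, False))
    return total, total >= THRESHOLD

print(score({"prior_arrests>=3": True, "age<25": True, "employed": True}))  # (2, True)
```
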
...