# Regression Under Human Assistance

```bibtex
@article{De2019RegressionUH,
  title   = {Regression Under Human Assistance},
  author  = {Abir De and Paramita Koley and Niloy Ganguly and Manuel Gomez-Rodriguez},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1909.02963}
}
```

Decisions are increasingly taken by both humans and machine learning models. However, machine learning models are currently trained for full automation—they are not aware that some of the decisions may still be taken by humans. In this paper, we take a first step towards the development of machine learning models that are optimized to operate under different automation levels. More specifically, we first introduce the problem of ridge regression under human assistance and show that it is NP…
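The setup in the abstract can be illustrated with a small sketch: some training samples are outsourced to humans (each with a known error), and a ridge-regression machine is fit on the rest. The greedy rule below is only an illustration of the trade-off, not the paper's algorithm; `ridge_fit`, `greedy_outsource`, and `human_err` are all hypothetical names.

```python
import numpy as np

# Hypothetical sketch of regression under human assistance: hand some
# training samples to humans (each with a known squared error) and fit
# ridge regression on the remainder.  Greedy heuristic for illustration
# only; not the paper's algorithm.

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def greedy_outsource(X, y, human_err, k, lam=1.0):
    """Greedily outsource at most k samples so that human error on the
    outsourced samples plus machine error on the rest is minimized."""
    n = X.shape[0]
    human = set()

    def total_error(h):
        machine = [i for i in range(n) if i not in h]
        w = ridge_fit(X[machine], y[machine], lam)
        m_err = float(np.sum((X[machine] @ w - y[machine]) ** 2))
        return m_err + sum(human_err[i] for i in h)

    best = total_error(human)
    for _ in range(k):
        cand = min((i for i in range(n) if i not in human),
                   key=lambda i: total_error(human | {i}))
        err = total_error(human | {cand})
        if err >= best:          # no sample is worth outsourcing anymore
            break
        human.add(cand)
        best = err
    return human, best
```

On data with a single gross outlier and small human error, this rule outsources the outlier and trains the machine on the clean remainder.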

## 28 Citations

### Classification Under Human Assistance

- Computer Science
- AAAI
- 2021

It is demonstrated that, under human assistance, supervised learning models trained to operate under different automation levels can outperform those trained for full automation as well as humans operating alone.

### Learning to Switch Between Machines and Humans

- Computer Science
- ArXiv
- 2020

This work develops an algorithm that uses upper confidence bounds on the human policy to find a sequence of switching policies whose total regret with respect to the optimal switching policy is sublinear.
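The upper-confidence-bound idea behind this switching result can be sketched with plain UCB1 over a set of reward sources (the "agents"). This is not the paper's algorithm; `ucb1` and `reward_fns` are made-up names.

```python
import numpy as np

# Plain UCB1: repeatedly play the reward source with the highest
# optimistic estimate of its mean reward.  Minimal illustration of the
# upper-confidence-bound idea, not the paper's switching algorithm.

def ucb1(reward_fns, rounds, seed=0):
    rng = np.random.default_rng(seed)
    k = len(reward_fns)
    counts = np.zeros(k)   # plays per arm
    sums = np.zeros(k)     # cumulative reward per arm
    for t in range(rounds):
        if t < k:
            arm = t                                    # try each arm once
        else:
            ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        counts[arm] += 1
        sums[arm] += reward_fns[arm](rng)
    return counts
```

With two Bernoulli arms of means 0.2 and 0.8, the better arm ends up played far more often.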

### Learning to Complement Humans

- Computer Science
- IJCAI
- 2020

This work demonstrates how an end-to-end learning strategy can be harnessed to optimize the combined performance of human–machine teams by considering the distinct abilities of people and machines, and analyzes the conditions under which this complementarity is strongest and which training methods amplify it.

### Differentiable Learning Under Triage

- Computer Science
- NeurIPS
- 2021

This work starts by formally characterizing under which circumstances a predictive model may benefit from algorithmic triage, and introduces a practical gradient-based algorithm that is guaranteed to find a sequence of predictive models and triage policies of increasing performance.

### Towards Unbiased and Accurate Deferral to Multiple Experts

- Computer Science
- AIES
- 2021

This work proposes a framework that simultaneously learns a classifier and a deferral system, with the deferral system choosing to defer to one or more human experts on instances where the classifier has low confidence.

### Consistent Estimators for Learning to Defer to an Expert

- Computer Science
- ICML
- 2020

This paper explores how to learn predictors that can either predict or choose to defer the decision to a downstream expert, based on a novel reduction to cost-sensitive learning that generalizes the cross-entropy loss.
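One common way such a reduction is instantiated is with a (K+1)-way softmax whose extra output means "defer to the expert", treated as an additional correct label whenever the expert is right. The sketch below illustrates that idea under this assumption; it is not this paper's exact estimator, and `train_defer` and its arguments are made-up names.

```python
import numpy as np

# Illustrative (K+1)-way softmax for learning to defer: the extra class
# means "defer", and it counts as a correct label whenever the expert is
# right on that example.  One way to instantiate a cost-sensitive
# reduction; not this paper's exact estimator.

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_defer(X, y, expert_correct, K, lr=0.5, steps=500):
    """Fit a linear (K+1)-way softmax by gradient descent on an
    augmented cross-entropy; expert_correct is a boolean mask."""
    n, d = X.shape
    W = np.zeros((d, K + 1))
    for _ in range(steps):
        P = softmax(X @ W)
        T = np.zeros_like(P)
        T[np.arange(n), y] = 1.0        # the true class is a target
        T[expert_correct, K] += 1.0     # so is "defer" when the expert is right
        # gradient of -sum_c T_c log p_c w.r.t. the logits
        W -= lr * X.T @ (P * T.sum(axis=1, keepdims=True) - T) / n
    return W
```

At test time, an argmax equal to K means the instance is deferred to the expert.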

### Selective Classification Can Magnify Disparities Across Groups

- Economics
- ICLR
- 2021

It is found that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities between various groups within a population, especially in the presence of spurious correlations.

### Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making

- Computer Science
- HCOMP
- 2022

It is demonstrated that, in practice, decision aids that are not complementary but make errors similar to human ones may have their own benefits, and that people perceive more similar decision aids as more useful, accurate, and predictable.

### Learning to Switch Among Agents in a Team via 2-Layer Markov Decision Processes

- Computer Science
- 2022

This work formally addresses the problem of learning to switch control among agents in a team via a 2-layer Markov decision process and develops an online learning algorithm that uses upper confidence bounds on the agents’ policies and the environment’s transition probabilities to find a sequence of switching policies.

### Who Should Predict? Exact Algorithms For Learning to Defer to Humans

- Computer Science
- 2023

It is proved that obtaining a linear pair with low error is NP-hard even when the problem is realizable, and a mixed-integer-linear-programming (MILP) formulation is given that can optimally solve the problem in the linear setting.

## References

Showing 1–10 of 40 references

### The Algorithmic Automation Problem: Prediction, Triage, and Human Effort

- Computer Science
- ArXiv
- 2019

It is argued here that automation is broader than just a comparison of human versus algorithmic performance on a task; it also involves the decision of which instances of the task to give to the algorithm in the first place, and a general framework is developed that poses this latter decision as an optimization problem.

### Active Learning in Approximately Linear Regression Based on Conditional Expectation of Generalization Error

- Computer Science
- J. Mach. Learn. Res.
- 2006

This paper proposes a new active learning method based on weighted least-squares learning and proves that the proposed active learning criterion is a more accurate predictor of the single-trial generalization error than the existing criterion.

### Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers

- Computer Science
- ICLR
- 2019

An uncertainty estimation algorithm is developed that selectively estimates the uncertainty of highly confident points, using earlier snapshots of the trained model, before their estimates are jittered (and way before they are ready for actual classification).

### Consistent algorithms for multiclass classification with an abstain option

- Computer Science
- 2018

The goal is to design consistent algorithms for such n-class classification problems with a ‘reject option’; while such algorithms are known for the binary (n = 2) case, little has been understood for the general multiclass case.

### Consistent Robust Regression

- Computer Science, Mathematics
- NIPS
- 2017

It is shown that CRR not only offers consistent estimates, but is empirically far superior to several other recently proposed algorithms for the robust regression problem, including extended Lasso and the TORRENT algorithm.

### Deep Residual Learning for Image Recognition

- Computer Science
- 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

### Active Learning with Statistical Models

- Computer Science
- NIPS
- 1994

This work shows how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression.

### Active Regression by Stratification

- Computer Science
- NIPS
- 2014

This is the first active learner for this setting that can provably improve over passive learning, and it provides finite-sample convergence guarantees for general distributions in the misspecified model.

### Submodular Observation Selection and Information Gathering for Quadratic Models

- Computer Science
- ICML
- 2019

An efficient greedy observation selection algorithm uniquely tailored to quadratic models is developed, theoretical bounds on its achievable utility are provided, and the relevant set functions are shown to be monotone and (weakly) submodular.

### High-performance medicine: the convergence of human and artificial intelligence

- Medicine, Computer Science
- Nature Medicine
- 2019

Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen.