Learning Representations by Humans, for Humans
@article{Hilgard2019LearningRB,
  title   = {Learning Representations by Humans, for Humans},
  author  = {Sophie Hilgard and Nir Rosenfeld and Mahzarin R. Banaji and Jack Cao and David C. Parkes},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1905.12686}
}
We propose a new, complementary approach to interpretability, in which machines are cast not as experts whose role is to recommend what should be done and why, but as advisers. The objective of these models is to communicate to a human decision-maker not what to decide but how to decide. We argue that machine-learning pipelines of this kind will be more readily adopted, since they allow the decision-maker to retain agency. Specifically, we develop a framework for learning…
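The abstract is truncated above, so the framework's details are not given here. As a rough illustration only, the following PyTorch sketch shows one plausible reading of the setup: a representation network is optimized for the quality of a human's downstream decisions, with a differentiable surrogate of the human decision-maker making that objective trainable. The surrogate-based loop and every identifier (`RepresentationNet`, `HumanSurrogate`, `representation_step`) are assumptions for illustration, not the paper's confirmed method.

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Maps raw features to a small, human-readable representation.
    Architecture is a hypothetical placeholder, not taken from the paper."""
    def __init__(self, in_dim: int, rep_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim)
        )

    def forward(self, x):
        return self.net(x)

class HumanSurrogate(nn.Module):
    """Differentiable stand-in for the human decision-maker, assumed to be
    fit separately to human decisions collected on candidate representations."""
    def __init__(self, rep_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(rep_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, z):
        return self.net(z)

def representation_step(rep_net, surrogate, x, y, opt):
    """One training step: optimize the representation so that the surrogate,
    standing in for the human, decides well when shown it."""
    opt.zero_grad()
    z = rep_net(x)                      # representation shown to the human
    logits = surrogate(z).squeeze(-1)   # surrogate's predicted human decision
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss.backward()                     # gradients flow through the surrogate...
    opt.step()                          # ...but only rep_net's weights update
    return loss.item()

# Hypothetical usage on a toy binary decision task.
rep_net = RepresentationNet(in_dim=10)
surrogate = HumanSurrogate()
opt = torch.optim.Adam(rep_net.parameters(), lr=1e-3)  # surrogate not updated
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,)).float()
print(representation_step(rep_net, surrogate, x, y, opt))
```

In a full pipeline the surrogate would presumably be refit periodically from fresh human decisions, alternating with representation updates; that outer loop is omitted in this sketch.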