Learning Representations by Humans, for Humans

@article{Hilgard2019LearningRB,
  title={Learning Representations by Humans, for Humans},
  author={Sophie Hilgard and Nir Rosenfeld and Mahzarin R. Banaji and Jack Cao and David C. Parkes},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.12686}
}
Abstract

We propose a new, complementary approach to interpretability, in which machines are not considered experts whose role is to suggest what should be done and why, but rather advisers. The objective of these models is to communicate to a human decision-maker not what to decide but how to decide. In this way, we propose that machine learning pipelines will be more readily adopted, since they allow a decision-maker to retain agency. Specifically, we develop a framework for learning…
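
The truncated abstract points at the paper's central mechanism: rather than optimizing a model for its own predictive accuracy, one learns a representation phi(x) that is shown to a person, and optimizes it for the quality of the decisions the person then makes. To make this trainable end-to-end, the human can be approximated by a differentiable surrogate fit to human responses, so that gradients flow through the surrogate into the representation. What follows is a minimal PyTorch sketch of that idea, not the authors' implementation; the class names, layer sizes, and toy data are all illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code): learn a
# representation phi(x) optimized for the quality of decisions made by
# a frozen, differentiable surrogate h of the human decision-maker.

import torch
import torch.nn as nn

class Representation(nn.Module):          # phi: x -> z (what the human sees)
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(),
                                 nn.Linear(32, z_dim))
    def forward(self, x):
        return self.net(x)

class HumanSurrogate(nn.Module):          # h: z -> decision logit (human proxy)
    def __init__(self, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1))
    def forward(self, z):
        return self.net(z)

x_dim, z_dim = 10, 2                      # hypothetical dimensions
phi, h = Representation(x_dim, z_dim), HumanSurrogate(z_dim)
for p in h.parameters():                  # surrogate is fixed; only phi learns
    p.requires_grad_(False)

opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(256, x_dim)               # toy data standing in for real inputs
y = (x.sum(dim=1, keepdim=True) > 0).float()

for step in range(200):
    opt.zero_grad()
    decision = h(phi(x))                  # the surrogate's decision on phi(x)
    loss = loss_fn(decision, y)           # decision quality, not model accuracy
    loss.backward()                       # gradients flow through h into phi
    opt.step()

The choice that matters in this sketch is freezing the surrogate h: only the representation phi receives gradients, so the objective is to make phi easy for (a model of) the human to decide from, rather than to make h itself accurate.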