Learning Representations by Humans, for Humans

  • Sophie Hilgard, N. Rosenfeld, Mahzarin R. Banaji, Jack Cao, D. Parkes
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • We propose a new, complementary approach to interpretability, in which machines are regarded not as experts whose role is to suggest what should be done and why, but as advisers. The objective of these models is to communicate to a human decision-maker not what to decide but how to decide. In this way, we propose that machine learning pipelines will be more readily adopted, since they allow a decision-maker to retain agency. Specifically, we develop a framework for learning…
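The abstract is truncated, but the general idea (a machine that shapes *how* a human decides rather than deciding for them) can be caricatured in a toy sketch. This is not the paper's method or code: the linear representation, the fixed "sum-and-threshold" human proxy, and all dimensions here are invented purely for illustration of the framing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 5-D; the label depends only on the first two features.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Learnable representation: z = X @ W.T maps each 5-D input to 2-D "advice".
W = rng.normal(scale=0.1, size=(2, 5))

# Fixed "human proxy": a simple, transparent rule the decision-maker applies
# to the representation (here: sum the two advice coordinates and threshold).
def human_decision(Z):
    return sigmoid(Z.sum(axis=1))

# Train W so that the human proxy, applied to z, matches the true labels.
# The decision rule never changes; only the representation adapts to it.
lr = 0.1
for _ in range(500):
    Z = X @ W.T
    p = human_decision(Z)
    grad_s = (p - y) / len(y)                      # d(cross-entropy)/ds, s = z1 + z2
    W -= lr * np.outer(np.ones(2), grad_s @ X)     # same per-row gradient for both coords

acc = ((human_decision(X @ W.T) > 0.5) == (y > 0.5)).mean()
print(f"human-proxy accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: all task-specific information is pushed into the learned representation, while the decision rule itself stays simple enough for a human to apply and retain agency over.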
    4 Citations, including:
    • Learning to Complement Humans
