Corpus ID: 32093

An unexpected unity among methods for interpreting model predictions

@article{Lundberg2016AnUU,
  title={An unexpected unity among methods for interpreting model predictions},
  author={Scott Lundberg and Su-In Lee},
  journal={ArXiv},
  year={2016},
  volume={abs/1611.07478}
}
  • Computer Science
  • Understanding why a model made a certain prediction is crucial in many data science fields. Interpretable predictions engender appropriate trust and provide insight into how the model may be improved. However, with large modern datasets the best accuracy is often achieved by complex models that even experts struggle to interpret, which creates a tension between accuracy and interpretability. Recently, several methods have been proposed for interpreting predictions from complex models by estimating…

    Citations

    Publications citing this paper (showing 5 of 43):

    "Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users

    VIEW 4 EXCERPTS
    CITES BACKGROUND & METHODS
    HIGHLY INFLUENCED

    Fuzzy logic interpretation of quadratic networks

    A Stratification Approach to Partial Dependence for Codependent Variables

    VIEW 1 EXCERPT
    CITES METHODS

    Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

    VIEW 1 EXCERPT
    CITES BACKGROUND

    Explanatory Interactive Machine Learning

    VIEW 3 EXCERPTS
    CITES BACKGROUND