Corpus ID: 67749683

Explaining a black-box using Deep Variational Information Bottleneck Approach

@article{Bang2019ExplainingAB,
  title={Explaining a black-box using Deep Variational Information Bottleneck Approach},
  author={Seo-Jin Bang and Pengtao Xie and Wei Wu and Eric P. Xing},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.06918}
}
  • Seo-Jin Bang, Pengtao Xie, Wei Wu, and Eric P. Xing
  • Published in ArXiv 2019
  • Mathematics, Computer Science
  • Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that…

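For orientation, the abstract above frames VIBI in terms of the information bottleneck. A minimal sketch of the classical information bottleneck objective (Tishby, Pereira, and Bialek) follows; this is the generic formulation, not necessarily the paper's exact VIBI objective or notation, which the truncated abstract does not give. With X the black-box input, Y its decision, and Z a compressed "explanation" representation:

    % Classical information bottleneck Lagrangian (assumed notation, not taken
    % from the paper): I(.;.) is mutual information, beta > 0 the trade-off weight.
    \max_{p(z \mid x)} \; I(Z; Y) \;-\; \beta \, I(Z; X)

The I(Z; Y) term rewards explanations that stay comprehensive about the black-box decision, while the -\beta I(Z; X) term rewards briefness by penalizing how much of the input the explanation retains. Deep variational treatments of this objective typically replace the intractable mutual-information terms with tractable variational bounds, which is the usual reading of the "Deep Variational Information Bottleneck" in the title.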
    Citations

    Publications citing this paper (1 of 3 listed here):

    Neural Image Compression and Explanation (cites background; highly influenced)
