Corpus ID: 211133001

Self-explaining AI as an alternative to interpretable AI

@inproceedings{Elton2020SelfexplainingAA,
  title={Self-explaining AI as an alternative to interpretable AI},
  author={Daniel C. Elton},
  year={2020}
}
  • Daniel C. Elton
  • Published 2020
  • Computer Science, Mathematics
  • The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is always possible to approximate the input-output relations of deep neural networks with human-understandable rules or a post-hoc model, the discovery of the double descent phenomenon suggests that no such approximation will ever map onto the actual mechanistic functioning of deep neural networks. Double descent…

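    The abstract's premise (that a network's input-output map can always be approximated post hoc by a human-understandable model) can be illustrated with a standard surrogate-model construction. The sketch below is not from the paper; it is a minimal, hypothetical example using scikit-learn, where the toy target function and all parameter choices are illustrative assumptions. A shallow decision tree is fit to a neural network's predictions; it can track the input-output map closely while sharing none of the network's internal mechanism, which is exactly the gap the abstract points to.

        # Minimal sketch (not from the paper): a post-hoc "surrogate" explanation.
        # A shallow decision tree is trained on a neural network's *predictions*,
        # so it approximates the input-output map without reflecting the
        # network's internal mechanics.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-2, 2, size=(2000, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2   # toy ground-truth function

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0)
        net.fit(X, y)

        # Post-hoc model: fit the tree to the network's outputs, not the labels.
        surrogate = DecisionTreeRegressor(max_depth=4, random_state=0)
        surrogate.fit(X, net.predict(X))

        # "Fidelity": R^2 of the tree's predictions against the network's.
        fidelity = surrogate.score(X, net.predict(X))
        print(f"surrogate fidelity to the network: R^2 = {fidelity:.3f}")

    A high fidelity score here only means the tree mimics the network's outputs on this data; it says nothing about whether the tree's splits correspond to how the network actually computes, which is the paper's core distinction.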

    References

    Publications referenced by this paper.
    Showing 1-10 of 45 references.

    A deep learning framework for neuroscience


    A jamming transition from under- to over-parametrization affects generalization in deep learning

    • S. Spigler, M. Geiger, +3 authors, M. Wyart
    • Journal of Physics A: Mathematical and Theoretical 52(47), 474001
    • 2019

    Complexity of Linear Regions in Deep Networks


    Deep Double Descent: Where Bigger Models and More Data Hurt


    Deep ReLU Networks Have Surprisingly Few Activation Patterns


    Definitions, methods, and applications in interpretable machine learning.


    Encoding Visual Attributes in Capsules for Explainable Medical Diagnoses

    • R. LaLonde, D. Torigian, U. Bagci
    • arXiv:1909.05926
    • 2019