Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology

@article{Rieck2019NeuralPA,
  title={Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology},
  author={Bastian Alexander Rieck and Matteo Togninalli and Christian Bock and Michael Moor and Max Horn and Thomas Gumbsch and Karsten M. Borgwardt},
  journal={ArXiv},
  year={2019},
  volume={abs/1812.09764}
}
  • Bastian Alexander Rieck, Matteo Togninalli, Christian Bock, Michael Moor, Max Horn, Thomas Gumbsch, Karsten M. Borgwardt
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • While many approaches to making neural networks more fathomable have been proposed, they are restricted to interrogating the network with input data. Measures for characterizing and monitoring structural properties, however, have not been developed. In this work, we propose neural persistence, a complexity measure for neural network architectures based on topological data analysis on weighted stratified graphs. To demonstrate the usefulness of our approach, we show that neural persistence…
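
The abstract's idea can be illustrated concretely: treat one fully connected layer as a weighted bipartite graph, filter its edges from strongest to weakest normalized absolute weight, and track when connected components merge (zero-dimensional persistent homology), summarizing the resulting persistence values with a p-norm. The sketch below is a minimal illustration of that computation assuming this setup; it is not the authors' reference implementation, and the function name `neural_persistence` is used here only for convenience.

```python
import numpy as np

def neural_persistence(W, p=2):
    """Illustrative 0-dimensional persistence of one dense layer.

    W: weight matrix (n_in x n_out) viewed as a bipartite layer graph.
    Edges enter the filtration in order of decreasing normalized
    absolute weight; vertices are born at filtration value 1.
    Returns the p-norm of the persistence values (birth - death).
    """
    n_in, n_out = W.shape
    w = np.abs(W) / np.abs(W).max()  # normalize weights to [0, 1]

    # Union-find over the layer's n_in + n_out vertices.
    parent = list(range(n_in + n_out))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Process edges from strongest to weakest connection.
    edges = sorted(
        ((w[i, j], i, n_in + j) for i in range(n_in) for j in range(n_out)),
        reverse=True,
    )
    persistence = []
    for weight, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                          # edge merges two components;
            parent[ru] = rv                   # one component dies here,
            persistence.append(1.0 - weight)  # born at 1, dies at weight
    return float(np.linalg.norm(persistence, ord=p))

# Toy layer: strong diagonal connections yield short-lived components.
W = np.array([[0.9, 0.1], [0.2, 0.8]])
print(neural_persistence(W))
```

Under this convention, a layer whose strong weights quickly connect all neurons produces small persistence values (low complexity), while a layer with many weak, late-merging connections produces larger ones.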

    Citations

    Publications citing this paper (17 total; partial list):

  • Topologically Densified Distributions (cites background)
  • A Topology Layer for Machine Learning (cites background)
  • Characterizing the Shape of Activation Space in Deep Neural Networks (cites background)
  • PI-Net: A Deep Learning Approach to Extract Topological Persistence Images (cites methods and background)
  • Topology of deep neural networks (cites methods)
  • Path Homologies of Deep Feedforward Networks (cites background)
  • Topological Autoencoders (cites background)

    References

    Publications referenced by this paper (46 total; partial list):

  • Practical Recommendations for Gradient-Based Training of Deep Architectures. Yoshua Bengio. In Neural Networks: Tricks of the Trade, 2012.
  • On the Information Bottleneck Theory of Deep Learning
  • Deep learning and the information bottleneck principle
  • On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures
  • Deep Learning with Topological Signatures (highly influential)