Corpus ID: 218869701

Geometric algorithms for predicting resilience and recovering damage in neural networks

@article{Raghavan2020GeometricAF,
  title={Geometric algorithms for predicting resilience and recovering damage in neural networks},
  author={Guruprasad Raghavan and Jiayi Li and Matt Thomson},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.11603}
}
Biological neural networks have evolved to maintain performance despite significant circuit damage. To survive damage, biological network architectures both possess intrinsic resilience to component loss and activate recovery programs that adjust network weights through plasticity to stabilize performance. Despite the importance of resilience in technology applications, the resilience of artificial neural networks is poorly understood, and autonomous recovery algorithms have yet to be… 
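A hedged sketch of the kind of damage experiment the abstract alludes to: ablate a random fraction of weights in a toy network and measure how far its outputs drift. Everything below (the random two-layer network, the damage model, the drift metric) is an illustrative assumption, not the paper's geometric algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with random weights, standing in for a trained model.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(32, 10))

def forward(x, W1, W2):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return h @ W2                # linear readout

def damage(W, fraction, rng):
    """Zero a random fraction of weights, mimicking component loss."""
    return W * (rng.random(W.shape) >= fraction)

x = rng.normal(size=(100, 64))   # batch of probe inputs
baseline = forward(x, W1, W2)

for frac in (0.1, 0.3, 0.5):
    drift = np.linalg.norm(forward(x, damage(W1, frac, rng), W2) - baseline)
    print(f"damage {frac:.0%}: relative output drift {drift / np.linalg.norm(baseline):.3f}")
```

Recovery, in the paper's sense, would then adjust the surviving weights to pull this drift back down; that step is omitted here.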

References

Showing 1-10 of 35 references

On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations

TLDR
It is shown that convolutional networks are surprisingly robust to a number of internal perturbations in the higher convolutional layers, but the bottom convolutional layers are much more fragile.
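A minimal way to probe this layer-dependent fragility is to perturb one layer at a time and compare output drift. The sketch below uses stacked fully connected layers as stand-ins for convolutional ones; the depth, noise scale, and drift metric are assumptions, not this paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three stacked ReLU layers standing in for a network's bottom/middle/top.
layers = [rng.normal(size=(32, 32)) for _ in range(3)]

def forward(x, layers):
    for W in layers:
        x = np.maximum(x @ W, 0.0)
    return x

x = rng.normal(size=(200, 32))
baseline = forward(x, layers)

# Perturb one layer at a time and measure how far the outputs move.
for i in range(3):
    noisy = list(layers)
    noisy[i] = layers[i] + 0.1 * rng.normal(size=layers[i].shape)
    drift = np.linalg.norm(forward(x, noisy) - baseline) / np.linalg.norm(baseline)
    print(f"perturbing layer {i}: relative drift {drift:.3f}")
```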

Understanding error propagation in deep learning neural network (DNN) accelerators and applications

TLDR
It is found that the error resilience of a DNN system depends on the data types, values, data reuse, and types of layers in the design, and two efficient protection techniques are proposed.

The Robustness of Modern Deep Learning Architectures against Single Event Upset Errors

TLDR
This paper tests several modern neural network architectures for their robustness to bit flips in their weights and examines which aspects of each different architecture lead to greater robustness.
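A single event upset is a hardware-induced bit flip. The sketch below (illustrative only, not this paper's test harness) flips individual bits of a float32 weight to show why robustness depends on which bit is hit: a mantissa flip barely moves the value, while a high exponent flip corrupts it by orders of magnitude.

```python
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32's binary representation (a single event upset)."""
    bits = np.array(value, dtype=np.float32).view(np.uint32)
    return (bits ^ np.uint32(1 << bit)).view(np.float32)

w = 0.5
for bit in (0, 23, 30):  # mantissa LSB, exponent LSB, high exponent bit
    print(f"flipping bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
```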

Learning both Weights and Connections for Efficient Neural Network

TLDR
A method that reduces the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections and pruning redundant connections with a three-step method.
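The three steps in Han et al. are train, prune, and retrain; the prune step removes low-magnitude weights. Below is a minimal sketch of that step alone (the sparsity level and random weight matrix are illustrative; training and retraining are omitted).

```python
import numpy as np

def prune_by_magnitude(W, sparsity):
    """Zero the smallest-magnitude weights; larger weights are kept as 'important'."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= threshold, W, 0.0)

rng = np.random.default_rng(2)
W = rng.normal(size=(256, 256))
W_pruned = prune_by_magnitude(W, sparsity=0.9)
print(f"kept {np.count_nonzero(W_pruned) / W.size:.1%} of connections")
```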

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

TLDR
This paper standardizes and expands the corruption robustness topic, showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P, which enables researchers to benchmark a classifier's robustness to common perturbations.

Robustness and fault tolerance make brains harder to study

TLDR
It is found that Granger Causality analysis, an important method used to infer circuit connections from the behavior of neurons within the circuit, is defeated by the mechanisms that give rise to this robustness and fault tolerance.

A critique of pure learning and what artificial neural networks can learn from animal brains

  • A. Zador
  • Nature Communications
  • 2019
TLDR
It is suggested that for AI to learn from animal brains, it is important to consider that animal behaviour results from brain connectivity specified in the genome through evolution, rather than from unique learning algorithms.

On the importance of single directions for generalization

TLDR
It is found that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.

Persistence of neuronal representations through time and damage in the hippocampus

TLDR
These findings indicate the presence of attractor-like ensemble dynamics as a mechanism by which the representations of an environment are encoded in the brain by groups of neurons with synchronous activity.