Corpus ID: 227275233

On 1/n neural representation and robustness

@article{Nassar2020On1N,
  title={On 1/n neural representation and robustness},
  author={Josue Nassar and Piotr A. Sok{\'o}l and SueYeon Chung and Kenneth D. Harris and Il Memming Park},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.04729}
}
Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning. It is therefore exciting that both fields converge not only on shared questions but also on similar approaches. A pressing question in these areas is understanding how the structure of the representation used by neural networks affects both their generalization and their robustness to perturbations. In this work, we investigate the latter by juxtaposing experimental results regarding…
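The "1/n" in the title refers to the power-law decay of the eigenvalues of the covariance of neural population responses, with the n-th principal component carrying variance roughly proportional to 1/n. As a rough illustration of how such a spectrum is measured, the following sketch (not code from the paper; the response matrix and fit range are placeholders) estimates the covariance eigenspectrum of a stimulus-by-unit response matrix and fits the power-law exponent on a log-log scale.

    import numpy as np

    def spectral_exponent(responses, k_min=10, k_max=100):
        # responses: (n_stimuli, n_units) matrix of activations or firing rates.
        X = responses - responses.mean(axis=0, keepdims=True)    # center each unit
        cov = X.T @ X / (X.shape[0] - 1)                         # covariance across stimuli
        eigvals = np.linalg.eigvalsh(cov)[::-1]                  # eigenvalues, descending
        ks = np.arange(1, eigvals.size + 1)
        sel = (ks >= k_min) & (ks <= k_max) & (eigvals > 0)
        # Slope of log eigenvalue vs. log rank; an exponent near 1 is a ~1/n spectrum.
        return -np.polyfit(np.log(ks[sel]), np.log(eigvals[sel]), 1)[0]

    rng = np.random.default_rng(0)
    fake_responses = rng.standard_normal((2000, 500)) * np.arange(1, 501) ** -0.5
    print(spectral_exponent(fake_responses))   # roughly 1 for this synthetic example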
Citations

Increasing neural network robustness improves match to macaque V1 eigenspectrum, spatial frequency preference and predictivity
TLDR
It is found that convolutional neural networks have a preference for high spatial frequency image features, unlike primary visual cortex (V1) cells, which suggests that the dependence on high-frequency image features for image classification may be related to image perturbations that affect models but not humans.
Optimal Input Representation in Neural Systems at the Edge of Chaos
TLDR
It is concluded that operating near criticality can, besides its usually alleged virtues, also have the advantage of allowing for flexible, robust and efficient input representations.
Population Codes Enable Learning from Few Examples By Shaping Inductive Bias
TLDR
This study considers biologically plausible readout of arbitrary stimulus-response maps from arbitrary population codes, and develops an analytical theory that predicts the generalization error of the readout as a function of the number of examples, suggesting sample-efficient learning as a general normative coding principle.

References

Showing 1-10 of 48 references
Towards the first adversarially robust neural network model on MNIST
TLDR
A novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions is presented and it is demonstrated that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
TLDR
It is demonstrated that regularizing input gradients makes them more naturally interpretable as rationales for model predictions, and that the resulting models also exhibit robustness to transferred adversarial examples generated to fool all of the other models.
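For orientation, input-gradient regularization of this kind adds a penalty on the norm of the loss gradient with respect to the input to the usual training objective. The sketch below is a minimal PyTorch illustration of that general idea, not the paper's exact objective or hyperparameters; model, x, y and lam are placeholders.

    import torch
    import torch.nn.functional as F

    def gradient_regularized_loss(model, x, y, lam=0.1):
        # Standard cross-entropy loss, with the input marked as differentiable.
        x = x.detach().clone().requires_grad_(True)
        ce = F.cross_entropy(model(x), y)
        # Gradient of the loss w.r.t. the input; create_graph=True so the penalty
        # itself can be backpropagated through during training.
        (grad_x,) = torch.autograd.grad(ce, x, create_graph=True)
        penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
        return ce + lam * penalty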
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
TLDR
It is proved that overparameterized neural networks trained with SGD (stochastic gradient descent) or its variants can learn, in polynomial time and using polynomially many samples, some notable concept classes, including two- and three-layer networks with fewer parameters and smooth activations.
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization
TLDR
This work suggests a theoretically inspired novel approach to improving a network's robustness using the Frobenius norm of its Jacobian, applied as post-processing after regular training has finished, and demonstrates empirically that it leads to enhanced robustness with minimal change in the original network's accuracy.
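As background on how such a Jacobian penalty can be computed efficiently: the squared Frobenius norm of the input-output Jacobian can be estimated without materializing the Jacobian, since for a random vector v with identity covariance, E||J^T v||^2 = ||J||_F^2. The sketch below uses this single-probe estimator in PyTorch; it illustrates the general technique and may differ from the approximation used in the paper, with model and x as placeholders.

    import torch

    def jacobian_frobenius_penalty(model, x):
        # Unbiased single-sample estimate of ||J||_F^2, where J = d model(x) / d x.
        x = x.detach().clone().requires_grad_(True)
        out = model(x)                               # shape (batch, n_classes)
        v = torch.randn_like(out)                    # random probe, E[v v^T] = I
        (jtv,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
        return jtv.pow(2).sum(dim=tuple(range(1, jtv.dim()))).mean()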
Intrinsic dimension of data representations in deep neural networks
TLDR
The intrinsic dimension (ID) of data representations, i.e. the minimal number of parameters needed to describe a representation, is studied, and it is found that in a trained network the ID is orders of magnitude smaller than the number of units in each layer.
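As a concrete example of the kind of estimator used in this line of work, the sketch below implements the TwoNN nearest-neighbour estimator in its maximum-likelihood form, which infers the intrinsic dimension from the ratio of second- to first-nearest-neighbour distances; it is a generic sketch and may differ in details from the procedure used in the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def twonn_intrinsic_dimension(X):
        # X: (n_samples, n_features) array of representation vectors.
        # Three neighbours: the point itself plus its first and second nearest neighbours.
        dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
        mu = dists[:, 2] / dists[:, 1]               # ratio of 2nd to 1st NN distance
        return len(mu) / np.log(mu).sum()            # MLE of the Pareto shape parameter

    # A 2-D manifold embedded in 50 dimensions should give an estimate close to 2.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 2)) @ rng.standard_normal((2, 50))
    print(twonn_intrinsic_dimension(X))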
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
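Robust optimization here means minimizing a worst-case loss over a norm ball around each input, with the inner maximization approximated by projected gradient descent (PGD). The sketch below is a minimal PGD attack loop in PyTorch; eps, alpha and steps are illustrative placeholders rather than the paper's exact settings, and inputs are assumed to lie in [0, 1].

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Start from a random point inside the l-infinity ball of radius eps.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back onto the eps-ball and the [0, 1] range.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    # Adversarial training then minimizes the loss on pgd_attack(model, x, y) instead of on x.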
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
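The linearity argument leads directly to the fast gradient sign method (FGSM): a single step of size eps in the direction of the sign of the input gradient. A minimal sketch, assuming inputs in [0, 1] and placeholder model, x, y:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=8/255):
        x = x.detach().clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        # One signed gradient step, clipped to the assumed [0, 1] input range.
        return (x + eps * grad.sign()).clamp(0, 1).detach()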
On Lazy Training in Differentiable Programming
TLDR
This work shows that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels.
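For reference, the linearization referred to here is the first-order Taylor expansion of the network in its parameters around initialization, under which training reduces to learning with the induced tangent kernel, which is positive semi-definite; the scaling mentioned in the summary controls how close the trained model stays to this regime. In LaTeX,

    f(x;\theta) \;\approx\; f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^{\top} (\theta - \theta_0),
    \qquad
    K(x, x') = \nabla_\theta f(x;\theta_0)^{\top} \nabla_\theta f(x';\theta_0).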
On the Information Bottleneck Theory of Deep Learning
TLDR
This paper presents a comprehensive theory of large-scale learning with Deep Neural Networks (DNNs), when optimized with Stochastic Gradient Descent (SGD), built on three theoretical components.