Reframing Neural Networks: Deep Structure in Overcomplete Representations

@article{Murdock2022ReframingNN,
  title={Reframing Neural Networks: Deep Structure in Overcomplete Representations},
  author={Calvin Murdock and Simon Lucey},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  volume={PP}
}
  • Calvin Murdock, S. Lucey
  • Published 10 March 2021
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
In comparison to classical shallow representation learning techniques, deep neural networks have achieved superior performance in nearly every application benchmark. But despite their clear empirical advantages, it is still not well understood what makes them so effective. To approach this question, we introduce deep frame approximation: a unifying framework for constrained representation learning with structured overcomplete frames. While exact inference requires iterative optimization, it may… 
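The contrast drawn here, exact inference via iterative optimization versus a single feedforward approximation, can be made concrete with a small sketch. The NumPy toy below (all names, dimensions, and the random frame W are illustrative assumptions, not the authors' implementation) compares one ReLU thresholding step against repeated proximal-gradient steps on the same nonnegative sparse coding objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): input x in R^d, overcomplete frame W in R^{d x k} with k > d.
d, k = 32, 128
W = rng.standard_normal((d, k)) / np.sqrt(d)
x = rng.standard_normal(d)
bias = 0.1 * np.ones(k)  # acts as the weight of a sparsity penalty on the code z

def relu(v):
    return np.maximum(v, 0.0)

# Feedforward layer: a single thresholding step, i.e. the one-shot approximation.
z_one_step = relu(W.T @ x - bias)

# Iterative inference: repeated nonnegative soft-thresholding (proximal gradient) steps
# on  min_z 0.5*||x - W z||^2 + bias^T z  subject to z >= 0.
L = np.linalg.norm(W, 2) ** 2      # Lipschitz constant of the smooth part
z = np.zeros(k)
for _ in range(200):
    z = relu(z - (W.T @ (W @ z - x) + bias) / L)

print("reconstruction error, one step :", np.linalg.norm(x - W @ z_one_step))
print("reconstruction error, iterative:", np.linalg.norm(x - W @ z))
```

In this toy setup the extra iterations reduce reconstruction error relative to the one-step code; the abstract's framing suggests that feedforward networks correspond to such one-step approximations of the underlying constrained representation problem.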
Citations

Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness
TLDR
It is proved that the stability theorem of BP holds upon the following generalizations: the regularization procedure can be separated into disjoint groups with different weights, neurons or full layers may form groups, and the regularizer takes various generalized forms of the ℓ1 norm.

References

SHOWING 1-10 OF 71 REFERENCES
Dataless Model Selection With the Deep Frame Potential
  • Calvin Murdock, S. Lucey
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
The deep frame potential is proposed: a measure of coherence that is approximately related to representation stability but has minimizers that depend only on network structure.
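As a rough, single-layer illustration of the kind of data-independent structural measure described above, the sketch below computes the classical frame potential of a weight matrix: the squared Frobenius norm of the Gram matrix of its normalized columns. This is only the basic building block; the deep frame potential of the cited paper aggregates structure across an entire network, which this toy code does not attempt. All dimensions are arbitrary assumptions.

```python
import numpy as np

def frame_potential(W):
    """Frame potential of the columns of W after unit normalization: the squared
    Frobenius norm of the Gram matrix. Lower values indicate lower mutual
    coherence; the minimum k**2 / d is attained by tight frames."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    gram = Wn.T @ Wn
    return float(np.sum(gram ** 2))

rng = np.random.default_rng(0)
d, k = 64, 256
W_random = rng.standard_normal((d, k))
print(frame_potential(W_random), "vs. tight-frame lower bound", k**2 / d)
```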
Deep Component Analysis via Alternating Direction Neural Networks
TLDR
Deep Component Analysis is introduced, an expressive multilayer model formulation that enforces hierarchical structure through constraints on latent variables in each layer that enables parameter learning using standard backpropagation and provides both a novel theoretical perspective for understanding networks and a practical technique for constraining predictions with prior knowledge.
Architectural Adversarial Robustness: The Case for Deep Pursuit
TLDR
A new method of deep pursuit approximates the activations of all layers as a single global optimization problem, allowing us to consider deeper, real-world architectures with skip connections such as residual networks.
A Closer Look at Memorization in Deep Networks
TLDR
The analysis suggests that notions of effective capacity that are dataset-independent are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.
Opening the Black Box of Deep Neural Networks via Information
TLDR
This work demonstrates the effectiveness of the Information-Plane visualization of DNNs and shows that adding more hidden layers dramatically reduces training time, suggesting that the main advantage of the hidden layers is computational.
On the Information Bottleneck Theory of Deep Learning
TLDR
This work studies the information bottleneck (IB) theory of deep learning, and finds that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
TLDR
The convolutional deep belief network is presented, a hierarchical generative model which scales to realistic image sizes and is translation-invariant and supports efficient bottom-up and top-down probabilistic inference.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks
TLDR
The traditional Basis Pursuit problem is generalized to a multi-layer setting, introducing similar sparsity-enforcing penalties at different representation layers in a symbiotic relation between synthesis and analysis sparse priors, providing a principled way to construct deep recurrent CNNs.
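A minimal sketch of the layered pursuit idea, under simplifying assumptions: each layer solves its own basis pursuit denoising problem with ISTA, and the resulting sparse code becomes the next layer's signal. The dictionaries, penalty weights, and iteration counts below are placeholders, and this greedy layer-by-layer scheme only loosely stands in for the joint multi-layer formulation analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam, n_iter=200):
    """Basis pursuit denoising, min_z 0.5*||x - D z||^2 + lam*||z||_1, via ISTA."""
    L = np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - D.T @ (D @ z - x) / L, lam / L)
    return z

rng = np.random.default_rng(0)
D1 = rng.standard_normal((32, 64)) / np.sqrt(32)    # layer-1 dictionary (placeholder)
D2 = rng.standard_normal((64, 128)) / np.sqrt(64)   # layer-2 dictionary (placeholder)
x = rng.standard_normal(32)

# Layer-by-layer pursuit: each layer's sparse code becomes the next layer's signal.
z1 = ista(D1, x, lam=0.05)
z2 = ista(D2, z1, lam=0.05)
print("nonzeros per layer:", np.count_nonzero(z1), np.count_nonzero(z2))
```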
Sensitivity and Generalization in Neural Networks: an Empirical Study
TLDR
It is found that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that this norm correlates well with generalization.
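The sensitivity measure used in that study, the norm of the input-output Jacobian, is straightforward to compute for a toy network. The sketch below evaluates it analytically for a two-layer ReLU network; the architecture, dimensions, and weights are arbitrary assumptions rather than the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 16, 64, 10        # arbitrary toy dimensions
W1 = 0.1 * rng.standard_normal((d_hid, d_in))
b1 = np.zeros(d_hid)
W2 = 0.1 * rng.standard_normal((d_out, d_hid))

def jacobian_frobenius_norm(x):
    """Frobenius norm of the input-output Jacobian of f(x) = W2 @ relu(W1 @ x + b1).
    Because ReLU is piecewise linear, the Jacobian is W2 @ diag(mask) @ W1, where
    mask marks the hidden units that are active at x."""
    mask = (W1 @ x + b1 > 0).astype(float)
    J = W2 @ (mask[:, None] * W1)
    return float(np.linalg.norm(J))

x = rng.standard_normal(d_in)
print("sensitivity at x:", jacobian_frobenius_norm(x))
```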