Corpus ID: 232147058

Spectral Tensor Train Parameterization of Deep Learning Layers

@article{Obukhov2021SpectralTT,
  title={Spectral Tensor Train Parameterization of Deep Learning Layers},
  author={Anton Obukhov and Maxim V. Rakhuba and Alexander Liniger and Zhiwu Huang and Stamatios Georgoulis and Dengxin Dai and Luc Van Gool},
  journal={ArXiv},
  year={2021},
  volume={abs/2103.04217}
}
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context. The low-rank property leads to parameter efficiency and permits taking computational shortcuts when computing mappings. Spectral properties are often subject to constraints in optimization problems, leading to better models and stability of optimization. We start by looking at the compact SVD parameterization of weight matrices and identifying redundancy sources in the…
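As a rough, non-authoritative illustration of such a parameterization (not the construction from this paper), the following PyTorch sketch defines a low-rank layer W = U diag(s) V^T, where U and V are kept semi-orthogonal by PyTorch's orthogonal parametrization and the singular values s are an explicit, directly constrainable parameter. The class name SVDLinear and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

class SVDLinear(nn.Module):
    """Illustrative sketch: low-rank linear map W = U diag(s) V^T.

    U (out x r) and V (in x r) are held semi-orthogonal via PyTorch's
    orthogonal parametrization; exp(log_s) gives positive singular values,
    so the spectrum is explicit and easy to inspect or constrain.
    Not the parameterization proposed in the paper above.
    """
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = orthogonal(nn.Linear(rank, out_features, bias=False))
        self.V = orthogonal(nn.Linear(rank, in_features, bias=False))
        self.log_s = nn.Parameter(torch.zeros(rank))

    def forward(self, x):
        s = torch.exp(self.log_s)        # positive singular values
        z = x @ self.V.weight            # (batch, rank): project onto the right singular basis
        z = z * s                        # scale by the spectrum
        return z @ self.U.weight.t()     # (batch, out_features)

layer = SVDLinear(in_features=512, out_features=256, rank=32)
y = layer(torch.randn(8, 512))           # low-rank mapping, W never formed explicitly

Because the spectrum is an explicit parameter, constraints such as clamping the largest singular value can be applied directly to log_s.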

Citations

On the Practicality of Deterministic Epistemic Uncertainty
TLDR: It is found that, while DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under realistic distributional shifts.
Randomized algorithms for rounding in the Tensor-Train format
TLDR: Several randomized algorithms are proposed that generalize randomized low-rank matrix approximation algorithms and significantly reduce computation compared to deterministic TT-rounding, demonstrated in an adaptation of GMRES to vectors in TT format.
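These methods build on randomized low-rank matrix approximation; as a loose, matrix-level illustration only (not the TT-rounding algorithms from that paper), here is a sketch of the standard randomized range-finder approach in PyTorch:

import torch

def randomized_low_rank(A, rank, oversample=10):
    # Sketch of randomized low-rank approximation (range finder + small SVD).
    # Illustrative of the matrix case only; TT-rounding generalizes this idea
    # to the cores of a tensor train.
    m, n = A.shape
    omega = torch.randn(n, rank + oversample, dtype=A.dtype)
    Q, _ = torch.linalg.qr(A @ omega)        # orthonormal basis approximating range(A)
    B = Q.T @ A                              # small (rank+oversample) x n matrix
    U_b, s, Vh = torch.linalg.svd(B, full_matrices=False)
    return Q @ U_b[:, :rank], s[:rank], Vh[:rank]   # A ≈ U diag(s) Vh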
T-Basis: a Compact Representation for Neural Networks
TLDR: It is concluded that T-Basis networks are equally well suited for training and inference in resource-constrained environments and for usage on edge devices.

References

Showing 1-10 of 51 references
Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
TLDR: This paper presents a novel method which can stabilize the low-rank approximation of convolutional kernels and ensure efficient compression while preserving the high performance of the neural networks.
Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization
TLDR: An efficient parametrization of the transition matrix of an RNN is proposed that stabilizes the gradients arising in training and empirically solves the vanishing-gradient issue to a large extent.
Ultimate tensorization: compressing convolutional and FC layers alike
TLDR: This paper combines the proposed approach with previous work to compress both convolutional and fully-connected layers of a network, achieving an 80x network compression rate with a 1.1% accuracy drop on the CIFAR-10 dataset.
Tensorizing Neural Networks
TLDR: This paper converts the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved.
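To make the TT-matrix idea concrete, here is a hedged sketch of reconstructing a dense weight matrix from TT cores; it uses the common core shape (r_{k-1}, m_k, n_k, r_k) and is only an illustration, not that paper's implementation.

import torch

def tt_matrix_to_dense(cores):
    # Contract TT-matrix cores into a dense (prod m_k) x (prod n_k) matrix.
    # Each core has shape (r_{k-1}, m_k, n_k, r_k) with boundary ranks r_0 = r_d = 1.
    # Storing only the cores replaces prod(m_k) * prod(n_k) parameters by the
    # (much smaller) sum of core sizes.
    full = cores[0]
    for core in cores[1:]:
        full = torch.einsum('amnb,bpqc->ampnqc', full, core)  # contract the shared TT rank
        a, m, p, n, q, c = full.shape
        full = full.reshape(a, m * p, n * q, c)
    return full.squeeze(0).squeeze(-1)

# Example: a 16x16 matrix from two cores with modes (4,4) x (4,4) and TT rank 3
cores = [torch.randn(1, 4, 4, 3), torch.randn(3, 4, 4, 1)]
W = tt_matrix_to_dense(cores)      # shape (16, 16), parameterized by 48 + 48 values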
Spectral Tensor-Train Decomposition
TLDR: A new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition is proposed, which combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate approximation schemes.
Wide Compression: Tensor Ring Nets
TLDR: This work introduces Tensor Ring Networks (TR-Nets), which significantly compress both the fully connected layers and the convolutional layers of deep neural networks, and shows promise in scientific computing and deep learning, especially for emerging resource-constrained devices such as smartphones, wearables, and IoT devices.
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
TLDR: Stable rank normalization (SRN) is proposed, a novel, optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator and can be shown to have a unique optimal solution.
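For context, the stable rank itself is the squared Frobenius norm divided by the squared spectral norm; a minimal sketch of computing that quantity (not the SRN scheme itself):

import torch

def stable_rank(W):
    # Stable rank = ||W||_F^2 / ||W||_2^2; always between 1 and rank(W).
    # This only evaluates the quantity that SRN constrains; it is not SRN.
    s = torch.linalg.svdvals(W)            # singular values, descending
    return (s ** 2).sum() / (s[0] ** 2)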
Tensor-Train Recurrent Neural Networks for Video Classification
TLDR: A new, more general, and efficient approach is proposed that factorizes the input-to-hidden weight matrix using Tensor-Train decomposition and trains it simultaneously with the remaining weights, providing a novel and fundamental building block for modeling high-dimensional sequential data with RNN architectures.
Spectral Normalization for Generative Adversarial Networks
TLDR: This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
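A minimal sketch of the power-iteration estimate underlying spectral normalization; PyTorch ships a maintained version as torch.nn.utils.parametrizations.spectral_norm, so this is purely illustrative:

import torch
import torch.nn.functional as F

def spectral_normalize(W, u, n_iters=1):
    # Estimate the largest singular value of W by power iteration on a
    # persistent vector u, then rescale W so its spectral norm is roughly 1.
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0)
        u = F.normalize(W @ v, dim=0)
    sigma = u @ W @ v
    return W / sigma, u                    # reuse the returned u at the next step

W = torch.randn(256, 128)
u = F.normalize(torch.randn(256), dim=0)
W_sn, u = spectral_normalize(W, u, n_iters=3)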
Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition
TLDR: A simple two-step approach for speeding up convolution layers within large convolutional neural networks, based on tensor decomposition and discriminative fine-tuning, is proposed, leading to higher CPU speedups at the cost of small accuracy drops for the smaller of the two evaluated networks.
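As a rough sketch of the layer structure such a CP-decomposed convolution takes in PyTorch (a pointwise conv, two rank-wise separable spatial convs, and a final pointwise conv); the decomposition of a pretrained kernel into these factors and the fine-tuning step are omitted, and the helper name is an assumption:

import torch.nn as nn

def cp_conv_block(in_ch, out_ch, kernel_size, rank):
    # Replace a d x d convolution by its rank-R CP factors:
    # channel mixing -> vertical factor -> horizontal factor -> channel mixing.
    d = kernel_size
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=1, bias=False),
        nn.Conv2d(rank, rank, kernel_size=(d, 1), padding=(d // 2, 0),
                  groups=rank, bias=False),
        nn.Conv2d(rank, rank, kernel_size=(1, d), padding=(0, d // 2),
                  groups=rank, bias=False),
        nn.Conv2d(rank, out_ch, kernel_size=1, bias=True),
    )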