• Corpus ID: 219956325

MARS: Masked Automatic Ranks Selection in Tensor Decompositions

@article{Kodryan2020MARSMA,
  title={MARS: Masked Automatic Ranks Selection in Tensor Decompositions},
  author={Maxim Kodryan and Dmitry Kropotov and Dmitry P. Vetrov},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.10859}
}
Tensor decomposition methods have recently proven to be efficient for compressing and accelerating neural networks. However, the problem of determining the optimal decomposition structure remains understudied despite its importance. Specifically, the decomposition ranks are the crucial parameter controlling the compression-accuracy trade-off. In this paper, we introduce MARS -- a new efficient method for the automatic selection of ranks in general tensor decompositions. During training… 
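To make the compression-accuracy trade-off mentioned in the abstract concrete, here is a minimal sketch (not the MARS method itself, and not taken from the paper) that compresses a single hypothetical fully-connected weight matrix with the simplest decomposition, a truncated SVD, and reports how the rank controls both the parameter count and the reconstruction error:

```python
import numpy as np

# Hedged illustration: a random matrix stands in for a trained layer's weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))  # hypothetical layer weight matrix

# Factor W once; truncating to the top-r singular triples gives a rank-r approximation.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

for rank in (16, 64, 256):
    W_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-r reconstruction
    params = rank * (W.shape[0] + W.shape[1])         # size of the two factors
    ratio = W.size / params                           # compression factor
    err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)  # relative error
    print(f"rank={rank:3d}  compression={ratio:.2f}x  rel_error={err:.3f}")
```

Lower ranks shrink the factors (higher compression) at the cost of a larger reconstruction error; MARS automates choosing such ranks, and for general decompositions rather than a plain SVD.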


Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey

This survey comprehensively covers design automation techniques for deep learning models targeting edge computing, and compares the key metrics commonly used to quantify model effectiveness, lightness, and computational cost.

References

SHOWING 1-10 OF 57 REFERENCES

Bayesian Tensorized Neural Networks with Automatic Rank Selection

Some mathematical notes on three-mode factor analysis

The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes, including a type of matrix process termed the Kronecker product and the definition of…

Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications

A simple and effective scheme to compress the entire CNN, called one-shot whole-network compression, is proposed; it addresses an important implementation-level issue with 1×1 convolutions, a key operation in the Inception module of GoogLeNet as well as in CNNs compressed by the proposed scheme.

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

A Sentiment Treebank that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences is presented; it poses new challenges for sentiment compositionality, which the introduced Recursive Neural Tensor Network addresses.

Tensorizing Neural Networks

Advances in Neural Information Processing Systems (NeurIPS), 2015

Ultimate tensorization: compressing convolutional and FC layers alike

This paper combines the proposed approach with the previous work to compress both convolutional and fully-connected layers of a network and achieve 80x network compression rate with 1.1% accuracy drop on the CIFAR-10 dataset.

Gradient-Based Learning Applied to Document Recognition

Various methods applied to handwritten character recognition are reviewed and compared and Convolutional Neural Networks, that are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

...