tntorch: Tensor Network Learning with PyTorch

@article{Usvyatsov2022tntorchTN,
  title={tntorch: Tensor Network Learning with PyTorch},
  author={Mikhail (Misha) Usvyatsov and Rafael Ballester-Ripoll and Konrad Schindler},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.11128}
}
We present tntorch, a tensor learning framework that supports multiple decompositions (including CANDECOMP/PARAFAC, Tucker, and Tensor Train) under a unified interface. With our library, the user can learn and handle low-rank tensors with automatic differentiation, seamless GPU support, and the convenience of PyTorch's API. Besides decomposition algorithms, tntorch implements differentiable tensor algebra, rank truncation, cross-approximation, batch processing, comprehensive tensor…
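As a quick illustration of the workflow the abstract describes, the sketch below compresses a dense tensor into the Tensor Train format and measures the reconstruction error. It assumes tntorch is installed; the ranks_tt keyword and the .torch() decompression call follow the library's documentation as we understand it, and exact names may differ between versions.

# Minimal sketch, assuming tntorch's documented Tensor constructor and
# .torch() decompression method; keyword names are version-dependent.
import torch
import tntorch as tn

x = torch.randn(32, 32, 32)          # dense tensor to compress

t = tn.Tensor(x, ranks_tt=5)         # Tensor Train compression with rank 5

x_hat = t.torch()                    # reconstruct a dense torch.Tensor
rel_err = torch.norm(x_hat - x) / torch.norm(x)
print(f"relative reconstruction error: {rel_err.item():.3f}")

Because the compressed object is built from torch tensors, it can be placed on the GPU and differentiated through, in line with the automatic differentiation and GPU support the abstract mentions.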

TedNet: A Pytorch Toolkit for Tensor Decomposition Networks

Tensor Train decomposition on TensorFlow (T3F)

TLDR
A library for Tensor Train decomposition on TensorFlow that makes machine learning papers relying on the Tensor Train decomposition easier to implement; it includes 92% test coverage, examples, and API reference documentation.

T4DT: Tensorizing Time for Learning Temporal 3D Visual Data

… function and applies tensor rank truncation to condense all frames into a single, compressed tensor that represents the entire 4D scene. We show that low-rank tensor compression is extremely compact…

Performance of low-rank approximations in tensor train format (TT-SVD) for large dense tensors

TLDR
A ‘tensor-train singular value decomposition’ (TT-SVD) algorithm is proposed, based on two building blocks: a ‘Q-less tall-skinny QR’ factorization and a fused tall-skinny matrix-matrix multiplication and reshape operation.
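For context, the baseline TT-SVD that such work accelerates can be sketched in plain PyTorch as below. This version uses ordinary truncated SVDs rather than the Q-less tall-skinny QR and fused multiply-and-reshape kernels the paper proposes, so it illustrates the decomposition itself, not the optimized algorithm; the function names are illustrative.

# Baseline TT-SVD sketch (sequential truncated SVDs); not the paper's
# optimized Q-less TSQR variant.
import torch

def tt_svd(x: torch.Tensor, max_rank: int):
    # Decompose a dense tensor into TT cores of shape (r_prev, n_k, r_next).
    dims = x.shape
    cores, r_prev, mat = [], 1, x
    for n in dims[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        u, s, vT = torch.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, s.numel())
        cores.append(u[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * vT[:r, :]        # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_dense(cores):
    # Contract the TT cores back into a dense tensor for error checking.
    out = cores[0]
    for core in cores[1:]:
        out = torch.tensordot(out, core, dims=([out.dim() - 1], [0]))
    return out.squeeze(0).squeeze(-1)

x = torch.randn(8, 8, 8, 8)
cores = tt_svd(x, max_rank=4)
print(torch.norm(tt_to_dense(cores) - x) / torch.norm(x))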

Performance of the low-rank tensor-train SVD (TT-SVD) for large dense tensors on modern multi-core CPUs

TLDR
A ‘tensor-train singular value decomposition’ (TT-SVD) algorithm is proposed, based on two building blocks: a ‘Q-less tall-skinny QR’ factorization and a fused tall-skinny matrix-matrix multiplication and reshape operation.

Hardware-Enabled Efficient Data Processing With Tensor-Train Decomposition

  • Zheng Qu, Bangyan Wang, Yuan Xie
  • Computer Science
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
  • 2022
TLDR
This article proposes an algorithm-hardware co-design with a customized architecture, namely TTD Engine, to accelerate TTD, and presents a case study demonstrating the benefits of TT-format data processing and the efficacy of the TTD Engine.

Tensor Methods in Computer Vision and Deep Learning

TLDR
This article provides an in-depth and practical review of tensors and tensor methods in the context of representation learning and deep learning, with a particular focus on visual data analysis and computer vision applications.

Tensor-based Emotion Editing in the StyleGAN Latent Space

TLDR
It is concluded that the tensor-based model is well suited for emotion and yaw editing, i.e., that the emotion or yaw rotation of a novel face image can be robustly changed without a significant effect on identity or other attributes in the image.

References

SHOWING 1-10 OF 30 REFERENCES

TedNet: A Pytorch Toolkit for Tensor Decomposition Networks

Tensor Train decomposition on TensorFlow (T3F)

TLDR
A library for Tensor Train decomposition on TensorFlow that makes machine learning papers relying on the Tensor Train decomposition easier to implement; it includes 92% test coverage, examples, and API reference documentation.

Kronecker CP Decomposition with Fast Multiplication for Compressing RNNs

TLDR
It is verified that the proposed KCP-RNNs achieve accuracy comparable to RNNs in other tensor-decomposed formats, and that a compression ratio of up to 278,219x can be obtained with the low-rank KCP.

Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation

TLDR
An end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries, combining a neural network encoder with a Tensor Train decomposition to learn a low-rank latent encoding, and using cross-approximation to learn the representation through a subset of the original samples.

Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition

TLDR
The key idea of TT-GP is to use the Tensor Train decomposition for the variational parameters, which allows training GPs with billions of inducing inputs and achieving state-of-the-art results on several benchmarks.

TTOpt: A Maximum Volume Quantized Tensor Train-based Optimization and its Application to Reinforcement Learning

TLDR
This work presents a novel optimization procedure based on the combination of a quantized tensor train representation and a generalized maximum matrix volume principle; it compares favorably to popular evolutionary methods and outperforms them in the number of function evaluations or execution time.

MUSCO: Multi-Stage Compression of neural networks

TLDR
A new, simple and efficient iterative approach that alternates low-rank factorization with smart rank selection and fine-tuning, improving the compression rate while maintaining accuracy for a variety of tasks.

Tensor Dropout for Robust Learning

TLDR
Tensor dropout is proposed, a randomization technique that can be applied to tensor factorizations, such as those parametrizing tensor layers; it improves generalization for image classification on ImageNet and CIFAR-100 and establishes state-of-the-art accuracy for phenotypic trait prediction on the largest available dataset of brain MRI (U.K. Biobank), where multi-linear structure is paramount.
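As a rough illustration of the idea (not the paper's exact formulation), dropping whole rank-1 components of a CP factorization during training can be sketched as follows; the function name and the choice to apply the mask to a single factor are illustrative assumptions.

# Toy sketch: randomly drop rank-1 components of a CP-factorized weight.
import torch

def cp_component_dropout(factors, p=0.2, training=True):
    # factors: list of (n_k, R) matrices sharing the same CP rank R.
    if not training or p == 0.0:
        return factors
    R = factors[0].shape[1]
    keep = (torch.rand(R) > p).to(factors[0].dtype)
    scale = keep / (1.0 - p)              # inverted-dropout rescaling
    # Masking one factor keeps or zeroes each rank-1 term as a whole.
    return [factors[0] * scale] + list(factors[1:])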

Tensorized Embedding Layers for Efficient Model Compression

TLDR
This work introduces a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly with a negligible drop, or even a slight gain, in performance.
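A minimal two-core sketch of that idea follows; the actual layer uses a general TT-matrix with several cores and careful initialization, and the class name and shapes here are illustrative assumptions. The vocabulary and embedding dimensions are factored as V = v1*v2 and D = d1*d2, so the full V x D table is never stored.

# Two-core TT-matrix embedding sketch (illustrative, not the paper's layer).
import torch
import torch.nn as nn

class TTEmbedding2(nn.Module):
    def __init__(self, v1, v2, d1, d2, rank):
        super().__init__()
        self.v2 = v2
        self.core1 = nn.Parameter(torch.randn(v1, d1, rank) * 0.02)
        self.core2 = nn.Parameter(torch.randn(rank, v2, d2) * 0.02)

    def forward(self, idx):                                   # idx: (batch,) token ids
        i1 = torch.div(idx, self.v2, rounding_mode="floor")   # factor each token id
        i2 = idx % self.v2
        a = self.core1[i1]                                    # (batch, d1, rank)
        b = self.core2[:, i2].permute(1, 0, 2)                # (batch, rank, d2)
        return torch.bmm(a, b).reshape(idx.shape[0], -1)      # (batch, d1*d2)

# Example: a 10,000-token vocabulary with 256-dim embeddings stored in
# roughly 1% of the dense parameter count (100*16*8 + 8*100*16 parameters).
emb = TTEmbedding2(v1=100, v2=100, d1=16, d2=16, rank=8)
vectors = emb(torch.randint(0, 10000, (4,)))
print(vectors.shape)        # torch.Size([4, 256])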