# Tensor Networks for Latent Variable Analysis. Part I: Algorithms for Tensor Train Decomposition

@article{Phan2016TensorNF, title={Tensor Networks for Latent Variable Analysis. Part I: Algorithms for Tensor Train Decomposition}, author={A. Phan and Andrzej Cichocki and Andr{\'e} Uschmajew and Petr Tichavsk{\'y} and Gheorghe Luta and Danilo P. Mandic}, journal={ArXiv}, year={2016}, volume={abs/1609.09230} }

Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model, which represents data as an ordered network of sub-tensors of order 2 or order 3, has so far not been widely considered in these fields, although this so-called tensor network decomposition has long been studied in quantum physics and scientific computing. In this study, we present novel algorithms and…
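The tensor train (TT) model discussed in the abstract represents an order-N tensor as a chain of order-3 cores. As a concrete illustration, here is a minimal NumPy sketch of the classic TT-SVD procedure via sequential truncated SVDs — a generic baseline, not the paper's proposed algorithms; the function names and truncation rule are our own choices:

```python
import numpy as np

def tt_svd(X, max_rank=None, tol=1e-10):
    """Sketch of TT-SVD: decompose an N-way array into a train of order-3 cores.

    Returns cores G[k] of shape (r_{k-1}, n_k, r_k) with r_0 = r_N = 1,
    obtained by sequential truncated SVDs of the unfolded remainder.
    """
    dims = X.shape
    cores, r_prev = [], 1
    C = X.reshape(1, -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))   # drop negligible singular values
        if max_rank is not None:
            r = min(r, max_rank)          # optionally cap the TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]          # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full array (for checking)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape(T.shape[1:-1])
```

With no rank cap, the sequential SVDs are exact and `tt_full(tt_svd(X))` reproduces `X` up to floating-point error; setting `max_rank` yields the truncated decompositions that the paper's algorithms aim to improve upon.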

## Figures and Tables from this paper

## 13 Citations

Tensor Networks for Latent Variable Analysis: Novel Algorithms for Tensor Train Approximation

- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2020

The proposed algorithms provide well-balanced TT-decompositions and are tested in the classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, achieving superior performance over the widely used truncated algorithms for TT decomposition.

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives

- Computer Science, Found. Trends Mach. Learn.
- 2017

This monograph builds on Tensor Networks for Dimensionality Reduction and Large-scale Optimization by discussing tensor network models for super-compressed higher-order representation of data/parameters and cost functions, together with an outline of their applications in machine learning and data analytics.

Tensor Networks for Latent Variable Analysis: Higher Order Canonical Polyadic Decomposition

- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2020

A novel method is proposed for the CPD of higher-order tensors, which rests upon a simple tensor network of representative interconnected core tensors of order no higher than 3.

A Survey on Tensor Techniques and Applications in Machine Learning

- Computer Science, IEEE Access
- 2019

This survey introduces the basics of tensors, including tensor operations, tensor decompositions, some tensor-based algorithms, and applications of tensors in machine learning and deep learning, aimed at readers interested in learning about tensors.

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions

- Computer Science, Found. Trends Mach. Learn.
- 2016

A focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.

Tensor Train Factorization and Completion under Noisy Data with Prior Analysis and Rank Estimation

- Computer Science
- 2020

A fully Bayesian treatment of TT decomposition is employed to avoid overfitting the noise, by endowing the model with automatic rank determination; based on the proposed probabilistic model, an efficient learning algorithm is derived under the variational inference framework.

Nonnegatively Constrained Tensor Network for Classification Problems

- Computer Science, 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN)
- 2019

A new computational algorithm is proposed for extracting low-rank and nonnegative 2D features, and it is demonstrated that this approach outperforms fundamental and state-of-the-art methods for dimensionality reduction and classification problems.

Learning Tensor Train Representation with Automatic Rank Determination from Incomplete Noisy Data

- Computer Science
- 2020

A fully Bayesian treatment of TT decomposition is employed to enable automatic rank determination, and theoretical evidence is established for adopting a Gaussian-product-Gamma prior to induce sparsity on the slices of the TT cores, so that the model complexity is automatically determined even under incomplete and noisy observed data.

Multidimensional harmonic retrieval based on Vandermonde tensor train

- Computer Science, Signal Process.
- 2019

Error Preserving Correction: A Method for CP Decomposition at a Target Error Bound

- Computer Science, IEEE Transactions on Signal Processing
- 2019

The aim is to seek an alternative tensor which preserves the approximation error while the norms of its rank-1 tensor components are minimized; this can be useful for decomposing tensors for which traditional algorithms fail.

## References

Showing 1–10 of 36 references

Tensor completion in hierarchical tensor representations

- Computer Science, ArXiv
- 2014

This book chapter considers versions of iterative hard thresholding schemes adapted to hierarchical tensor formats and provides first partial convergence results based on a tensor version of the restricted isometry property (TRIP) of the measurement map.

Tensor Decompositions and Applications

- Computer Science, SIAM Rev.
- 2009

This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order…

The Alternating Linear Scheme for Tensor Optimization in the Tensor Train Format

- Computer Science, SIAM J. Sci. Comput.
- 2012

This article shows how optimization tasks can be treated in the TT format by a generalization of the well-known alternating least squares (ALS) algorithm and by a modified approach (MALS) that enables dynamical rank adaptation.
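The single-core update at the heart of such an ALS sweep can be sketched in NumPy. This is a generic illustration under our own conventions (cores stored as order-3 arrays of shape `(r_{k-1}, n_k, r_k)` with boundary ranks 1, fitting a given full tensor `X`), not the authors' implementation:

```python
import numpy as np

def tt_full(cores):
    """Contract a list of order-3 TT cores (r_{k-1}, n_k, r_k) into a full array."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape(T.shape[1:-1])

def als_update(cores, k, X):
    """Optimal least-squares update of core k, with all other cores held fixed."""
    dims = X.shape
    # left interface: (n_0*...*n_{k-1}, r_{k-1})
    L = np.ones((1, 1))
    for G in cores[:k]:
        L = np.tensordot(L, G, axes=([1], [0])).reshape(-1, G.shape[2])
    # right interface: (r_k, n_{k+1}*...*n_{N-1})
    R = np.ones((1, 1))
    for G in reversed(cores[k + 1:]):
        R = np.tensordot(G, R, axes=([2], [0])).reshape(G.shape[0], -1)
    P, Q = L.shape[0], R.shape[1]
    # the full tensor is linear in core k: minimizer of ||L @ G @ R - X||_F
    # is G = pinv(L) @ X @ pinv(R), applied slice-wise over the mode-k index
    H = (np.linalg.pinv(L) @ X.reshape(P, dims[k] * Q)).reshape(L.shape[1], dims[k], Q)
    cores[k] = np.tensordot(H, np.linalg.pinv(R), axes=([2], [0]))
```

Sweeping `als_update` over all cores, left to right and back, gives one ALS iteration; since each update exactly minimizes the fit error in one core, the error is monotonically non-increasing. The rank-adaptive MALS variant mentioned above instead optimizes two adjacent cores jointly and re-splits them by a truncated SVD.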

Nonnegative Matrix and Tensor Factorizations - Applications to Exploratory Multi-way Data Analysis and Blind Source Separation

- Computer Science
- 2009

This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including its various extensions and modifications, especially Nonnegative Tensor…

Optimization on the Hierarchical Tucker manifold – Applications to tensor completion

- Computer Science
- 2014

Tensor Deflation for CANDECOMP/PARAFAC— Part I: Alternating Subspace Update Algorithm

- Computer Science, IEEE Transactions on Signal Processing
- 2015

A novel deflation method for the CP decomposition of order-3 tensors of size R×R×R and rank R, which has a computational cost of O(R^3) per iteration, lower than the cost of the ALS algorithm for the overall CP decomposition.

Tensor Decompositions for Signal Processing Applications: From two-way to multiway component analysis

- Computer Science, IEEE Signal Processing Magazine
- 2015

Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints which match data properties and extract more general latent components in the data than matrix-based methods.

Overview of constrained PARAFAC models

- Computer Science, EURASIP J. Adv. Signal Process.
- 2014

In this paper, we present an overview of constrained parallel factor (PARAFAC) models where the constraints model linear dependencies among columns of the factor matrices of the tensor decomposition…

Low-rank tensor completion by Riemannian optimization

- Computer Science
- 2014

A new algorithm is proposed that performs Riemannian optimization techniques on the manifold of tensors of fixed multilinear rank with particular attention to efficient implementation, which scales linearly in the size of the tensor.

Low-Rank Tensor Methods with Subspace Correction for Symmetric Eigenvalue Problems

- Computer Science, SIAM J. Sci. Comput.
- 2014

This work considers the solution of large-scale symmetric eigenvalue problems for which the eigenvectors are known to admit a low-rank tensor approximation, as arising from the discretization of high-dimensional elliptic PDE eigenvalue problems or in strongly correlated spin systems.