A literature survey of low‐rank tensor approximation techniques

@article{Grasedyck2013ALS,
  title={A literature survey of low‐rank tensor approximation techniques},
  author={Lars Grasedyck and Daniel Kressner and Christine Tobler},
  journal={GAMM‐Mitteilungen},
  year={2013},
  volume={36}
}
During the last years, low‐rank tensor approximation has been established as a new tool in scientific computing to address large‐scale linear and multilinear algebra problems, which would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function‐related tensors. (© 2013 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim) 

A Review on Adaptive Low-Rank Approximation Techniques in the Hierarchical Tensor Format

This review discusses several strategies for an adaptive approximation of tensors in the hierarchical format by black box type techniques, including problems of tensor reconstruction and tensor completion.

Low-rank tensor completion by Riemannian optimization

A new algorithm is proposed that applies Riemannian optimization on the manifold of tensors of fixed multilinear rank, with particular attention to an efficient implementation that scales linearly in the size of the tensor.
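
As a rough illustration of the fixed multilinear rank constraint (not the Riemannian scheme of this paper), the sketch below completes a synthetic tensor by alternating between re-imposing the observed entries and truncating back to a prescribed multilinear rank with a truncated HOSVD; the sizes, ranks, sampling rate, and function names are illustrative assumptions, and this simple heuristic carries no convergence guarantee.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Quasi-optimal projection onto multilinear rank <= ranks via truncated HOSVD:
    project each mode onto the span of the leading left singular vectors."""
    approx = T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        P = U[:, :r] @ U[:, :r].T          # orthogonal projector for this mode
        approx = np.moveaxis(np.tensordot(P, approx, axes=([1], [mode])), 0, mode)
    return approx

# Toy completion by alternating projection: impose the observed entries, then
# truncate back to the target multilinear rank (a simple heuristic, no guarantees).
rng = np.random.default_rng(0)
A = np.einsum('ir,jr,kr->ijk', rng.standard_normal((20, 2)),
              rng.standard_normal((20, 2)), rng.standard_normal((20, 2)))
mask = rng.random(A.shape) < 0.4            # roughly 40% of the entries observed
X = np.where(mask, A, 0.0)
for _ in range(200):
    X = hosvd_truncate(X, ranks=(2, 2, 2))
    X[mask] = A[mask]                        # re-impose the observed entries
print("relative error on the missing entries:",
      np.linalg.norm((X - A)[~mask]) / np.linalg.norm(A[~mask]))
```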

Low‐rank approximation‐based tensor decomposition model for subspace clustering

This Letter proposes a low-rank approximation-based tensor decomposition (LRATD) algorithm for subspace clustering and develops an accelerated proximal gradient method to solve the resulting optimization problem.

A Randomized Tensor Train Singular Value Decomposition

This work presents and analyzes a randomized algorithm for computing the hierarchical SVD (HSVD) in the tensor train (TT) format and examines generalizations of randomized matrix decomposition methods to higher-order tensors in the framework of the hierarchical tensor representation.
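
For orientation, here is a minimal deterministic TT-SVD in NumPy; the randomized algorithm of the paper roughly replaces the exact SVD of each unfolding by a randomized low-rank factorization. The tolerance handling and the test tensor are illustrative assumptions.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Sketch of the deterministic TT-SVD: peel off one TT core per dimension
    via a truncated SVD of the current unfolding."""
    shape, d = T.shape, T.ndim
    # per-step threshold derived from the target relative accuracy eps
    delta = eps * np.linalg.norm(T) / max(np.sqrt(d - 1), 1.0)
    cores, C, r_prev = [], np.asarray(T, dtype=float), 1
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > delta)))   # simple threshold on the singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]             # carry the remainder to the next step
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor (for small sanity checks)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape(T.shape[1:-1])

# A sum of two rank-1 tensors has TT ranks at most 2; the TT-SVD recovers this.
rng = np.random.default_rng(1)
T = (np.einsum('i,j,k->ijk', *(rng.standard_normal(n) for n in (8, 9, 10)))
     + np.einsum('i,j,k->ijk', *(rng.standard_normal(n) for n in (8, 9, 10))))
cores = tt_svd(T, eps=1e-12)
print("TT ranks:", [G.shape[2] for G in cores[:-1]])
print("relative reconstruction error:",
      np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T))
```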

Fast Multidimensional Convolution in Low-Rank Tensor Formats via Cross Approximation

A new cross-conv algorithm for approximate computation of convolution in different low-rank tensor formats (tensor train, Tucker, hierarchical Tucker), based on applying cross approximation in the “frequency domain,” where convolution becomes a simple elementwise product.
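
The structural fact exploited here can be checked in a few lines: for tensors stored as sums of outer products (CP format), multidimensional convolution reduces to one-dimensional convolutions of the factors, the low-rank analogue of the elementwise product in the frequency domain. The sketch below verifies this against a full-grid FFT convolution; it is a toy illustration of the format, not the cross-approximation algorithm of the paper, and all sizes and ranks are illustrative assumptions.

```python
import numpy as np

def cp_full(factors):
    """Assemble a full 3D tensor from CP factors A, B, C (columns = rank-1 terms)."""
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def conv_full(X, Y):
    """Full (linear) 3D convolution via zero-padded FFTs."""
    shape = [sx + sy - 1 for sx, sy in zip(X.shape, Y.shape)]
    return np.real(np.fft.ifftn(np.fft.fftn(X, shape) * np.fft.fftn(Y, shape)))

def conv_cp(fx, fy):
    """Convolution of two CP tensors: convolve the factors mode-wise; ranks multiply."""
    out = []
    for Fx, Fy in zip(fx, fy):
        cols = [np.convolve(Fx[:, p], Fy[:, q])
                for p in range(Fx.shape[1]) for q in range(Fy.shape[1])]
        out.append(np.stack(cols, axis=1))
    return out

rng = np.random.default_rng(2)
fx = [rng.standard_normal((n, 2)) for n in (6, 7, 8)]   # CP rank 2
fy = [rng.standard_normal((n, 3)) for n in (5, 6, 7)]   # CP rank 3
Z_full = conv_full(cp_full(fx), cp_full(fy))
Z_cp = cp_full(conv_cp(fx, fy))                          # CP rank 2*3 = 6
print("relative difference:",
      np.linalg.norm(Z_cp - Z_full) / np.linalg.norm(Z_full))
```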

Reduced Basis Methods: From Low-Rank Matrices to Low-Rank Tensors

A novel combination of the reduced basis method with low-rank tensor techniques for the efficient solution of parameter-dependent linear systems with several parameters, so that the solution for a new parameter value becomes a cheap online task that does not require the solution of a linear system.

Numerical methods in higher dimensions using tensor factorizations

A unified view on different algorithms, based on tensor factorizations, for the solution of seemingly diverse and unconnected problems in higher dimensions.

A simpler approach to low-rank tensor canonical polyadic decomposition

Daniel L. Pimentel-Alarcón, 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
This paper presents a simple and efficient method to compute the canonical polyadic decomposition (CPD) of generic low-rank tensors using elementary linear algebra and complements the theoretical analysis with experiments that support the findings.
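
For context, the standard baseline for this task is alternating least squares (CP-ALS); the sketch below is a generic CP-ALS for third-order tensors, not the method proposed in the paper, and the rank, sizes, and iteration count are illustrative assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of two matrices with the same number of columns."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Basic CP-ALS for a third-order tensor: cyclically solve a linear least-squares
    problem for each factor matrix while keeping the other two fixed."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    A = rng.standard_normal((n1, rank))
    B = rng.standard_normal((n2, rank))
    C = rng.standard_normal((n3, rank))
    T1 = T.reshape(n1, -1)                       # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(n2, -1)    # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(n3, -1)    # mode-3 unfolding
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover a synthetic noiseless rank-3 tensor.
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (10, 11, 12))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
print("relative fit error:", err)
```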

Invited session proposal “Low-rank approximation”

The proposed invited session is focused on theoretical and algorithmic aspects of matrix and tensor low-rank approximations, with applications in system theory and multidimensional signal processing.
...

References

Showing 1-10 of 313 references

Constructive Representation of Functions in Low-Rank Tensor Formats

In this paper, we obtain explicit representations of several multivariate functions in the Tensor Train (TT) format and explicit TT-representations of tensors that stem from tensorization.
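
A tiny numerical illustration of the tensorization idea, under the usual quantics/QTT assumptions: samples of e^x on a uniform grid with 2^d points, reshaped into a 2 x 2 x ... x 2 tensor, separate exactly over the binary digits, so every sequential unfolding has matrix rank 1. The grid size d is an illustrative choice.

```python
import numpy as np

# Tensorize f(x) = exp(x): sample on a uniform grid with 2^d points and reshape
# the sample vector into a d-dimensional 2 x 2 x ... x 2 tensor ("quantization").
d = 10
h = 1.0 / 2**d
T = np.exp(h * np.arange(2**d)).reshape((2,) * d)

# exp(h*n) factors over the binary digits of n, so T is an exact outer product of
# d vectors of length 2; every sequential unfolding therefore has matrix rank 1,
# i.e. all TT ranks of the tensorized exponential equal 1.
for k in range(1, d):
    M = T.reshape(2**k, 2**(d - k))
    print(f"rank of unfolding {k}: {np.linalg.matrix_rank(M)}")
```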

On the best low multilinear rank approximation of higher-order tensors

This paper deals with the best low multilinear rank approximation of higher-order tensors, used as a tool for dimensionality reduction and signal subspace estimation.

Conjugate gradient algorithms for best rank‐1 approximation of tensors

Numerical simulations indicate that the proposed method offers an alternative to the higher‐order power method for computing the best rank‐1 approximation of a tensor.
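
Since the higher-order power method (HOPM) is the baseline named here, a minimal HOPM for third-order tensors is sketched below (not the conjugate gradient algorithm of the paper); the test tensor and iteration count are illustrative assumptions.

```python
import numpy as np

def hopm(T, n_iter=100, seed=0):
    """Higher-order power method: alternately update unit vectors u, v, w to
    maximize <T, u x v x w>; returns the scale and the three unit factors."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(n) for n in T.shape)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    sigma = np.einsum('ijk,i,j,k->', T, u, v, w)
    return sigma, u, v, w

rng = np.random.default_rng(4)
T = rng.standard_normal((8, 9, 10))
sigma, u, v, w = hopm(T)
R1 = sigma * np.einsum('i,j,k->ijk', u, v, w)
# With unit factors and sigma = <T, u x v x w>, ||T - R1||^2 = ||T||^2 - sigma^2.
print("rank-1 residual:", np.linalg.norm(T - R1),
      "expected:", np.sqrt(np.linalg.norm(T)**2 - sigma**2))
```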

A New Scheme for the Tensor Representation

A truncation algorithm can be implemented which is based on the standard matrix singular value decomposition (SVD), and standard linear algebra tools can be applied for performing arithmetic operations and for the computation of data-sparse approximations.

A regularized Newton method for the efficient approximation of tensors represented in the canonical tensor format

A rank approximation algorithm for tensors represented in the canonical format in arbitrary pre-Hilbert tensor product spaces is considered, and it is shown that the original approximation problem is equivalent to a finite-dimensional ℓ2 minimization problem.

Existence and Computation of Low Kronecker-Rank Approximations for Large Linear Systems of Tensor Product Structure

An approximation is constructed to the solution x of a linear system of equations Ax=b of tensor product structure, as it typically arises for finite element and finite difference discretisations of partial differential operators on tensor grids.
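
As a small reminder of what tensor product structure buys, the sketch below assembles a two-dimensional Laplace-type system A = A1 ⊗ I + I ⊗ A2 and checks that it can be solved as the equivalent matrix equation A1 X + X A2 = B without ever forming the large Kronecker matrix; the low Kronecker-rank solvers discussed here push this idea to much higher dimensions. The operators, sizes, and right-hand side are illustrative assumptions.

```python
import numpy as np

def lap1d(n):
    """Standard 1D finite-difference Laplacian (tridiagonal, symmetric positive definite)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n, m = 30, 25
A1, A2 = lap1d(n), lap1d(m)
rng = np.random.default_rng(5)
B = rng.standard_normal((n, m))

# Direct solve of the Kronecker-structured system (A1 kron I + I kron A2) x = vec(B),
# forming the full (n*m) x (n*m) matrix -- only feasible for small sizes.
A = np.kron(A1, np.eye(m)) + np.kron(np.eye(n), A2)
X_direct = np.linalg.solve(A, B.ravel()).reshape(n, m)

# Structured solve: with row-major vec and symmetric A2, the same system is the
# matrix equation A1 X + X A2 = B. Diagonalize the small 1D operators and divide
# by sums of eigenvalues -- the large matrix A is never needed.
d1, Q1 = np.linalg.eigh(A1)
d2, Q2 = np.linalg.eigh(A2)
X_struct = Q1 @ ((Q1.T @ B @ Q2) / (d1[:, None] + d2[None, :])) @ Q2.T

print("relative difference between the two solves:",
      np.linalg.norm(X_struct - X_direct) / np.linalg.norm(X_direct))
```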

Structured Rank-(r1, ..., rd) Decomposition of Function-related Tensors in R^d

The structured tensor-product approximation of multidimensional nonlocal operators by a two-level rank-(r1, ..., rd) decomposition of related higher-order tensors is proposed.

Musings on multilinear fitting

Differential-geometric Newton method for the best rank-(R1, R2, R3) approximation of tensors

A differential-geometric Newton method for computing the best rank-(R1, R2, R3) approximation of a third-order tensor is proposed; the generalization to tensors of order higher than three is straightforward.
...