Completion of High Order Tensor Data with Missing Entries via Tensor-Train Decomposition

by Longhao Yuan, Qibin Zhao, and Jianting Cao
In this paper, we aim at the completion problem of high-order tensor data with missing entries. Existing tensor factorization and completion methods suffer from the curse of dimensionality when the tensor order N ≫ 3. To overcome this problem, we propose an efficient algorithm called TT-WOPT (Tensor-train Weighted OPTimization) to find the latent core tensors of tensor data and recover the missing entries. Tensor-train decomposition offers powerful representation ability for such high-order data.
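As a rough sketch of the idea (my own naming and code, not the authors' implementation), a TT-WOPT-style method reconstructs the tensor from its TT cores and minimizes a weighted least-squares loss over the observed entries only:

```python
import numpy as np

def tt_to_full(cores):
    """Contract a list of TT cores G_k with shapes (r_{k-1}, n_k, r_k)
    into the full tensor (boundary ranks r_0 = r_N = 1)."""
    full = cores[0]                      # shape (1, n_1, r_1)
    for core in cores[1:]:
        # merge the trailing rank index of `full` with the leading one of `core`
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))    # drop the boundary rank-1 modes

def tt_wopt_loss(cores, data, mask):
    """Weighted least-squares objective used by TT-WOPT-style methods:
    only observed entries (mask == 1) contribute to the error."""
    residual = mask * (data - tt_to_full(cores))
    return 0.5 * np.sum(residual ** 2)
```

In practice the cores would be updated by a gradient-based optimizer on this loss; the sketch shows only the objective.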
High-Order Tensor Completion for Data Recovery via Sparse Tensor-Train Optimization
An algorithm named Sparse Tensor-train Optimization (STTO) is proposed which treats incomplete data as a sparse tensor, uses a first-order optimization method to find the factors of the tensor-train decomposition, and employs a tensorization method to transform data into a higher-order form.
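The tensorization step can be illustrated with a minimal sketch (the exact reshaping scheme in the paper may differ): a length mode_size**d array is folded into an order-d tensor so that TT methods can exploit the extra modes.

```python
import numpy as np

def tensorize(vec, mode_size=2):
    """Fold a length mode_size**d vector into an order-d tensor with
    every mode of size mode_size -- a common 'tensorization' step
    applied before tensor-train methods."""
    d = int(round(np.log(vec.size) / np.log(mode_size)))
    assert mode_size ** d == vec.size, "length must be a power of mode_size"
    return vec.reshape((mode_size,) * d)
```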
Higher-dimension Tensor Completion via Low-rank Tensor Ring Decomposition
This paper proposes a new tensor completion approach named tensor ring weighted optimization (TR-WOPT), which finds the latent factors of the incomplete tensor by a gradient descent algorithm; the latent factors are then employed to predict the missing entries of the tensor.
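For orientation, the tensor ring model underlying such methods evaluates each entry as the trace of a product of core slices; a minimal sketch (my naming, not the paper's code):

```python
import numpy as np

def tr_entry(cores, index):
    """Single entry of a tensor-ring model: the trace of a product of
    core slices G_k[:, i_k, :]. Cores have shape (r_k, n_k, r_{k+1}),
    with the last rank wrapping around to the first to close the ring."""
    prod = cores[0][:, index[0], :]
    for core, i in zip(cores[1:], index[1:]):
        prod = prod @ core[:, i, :]
    return np.trace(prod)
```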
High-order tensor completion via gradient-based optimization under tensor train format
This paper attempts to find the low-rank TT decomposition of incomplete data, which captures the latent features of the whole data and is then used to reconstruct the missing entries; it proposes two TT-based algorithms and a method to transform visual data into higher-order tensors, resulting in improved algorithm performance.
Rank Minimization on Tensor Ring: A New Paradigm in Scalable Tensor Decomposition and Completion
Taking advantage of the high compressibility of the recently proposed tensor ring (TR) decomposition, a new model for the tensor completion problem is proposed by introducing convex surrogates of the tensor low-rank assumption on the latent tensor ring factors, which makes it possible for Schatten norm regularization based models to be solved at a much smaller scale.
Tensor Completion using Balanced Unfolding of Low-Rank Tensor Ring
Tensor completion aims to recover a multi-dimensional array from its incomplete observations. The recently proposed tensor ring (TR) decomposition has powerful representation ability.
Tensor Ring Decomposition with Rank Minimization on Latent Space: An Efficient Approach for Tensor Completion
This paper proposes a novel tensor completion method which is robust to model selection, introducing nuclear norm regularization on the latent TR factors so that the optimization step using singular value decomposition (SVD) is performed at a much smaller scale.
Provable Model for Tensor Ring Completion
This paper rigorously analyzes the sample complexity of TR completion and finds that it also possesses the balance characteristic, consistent with the result for matrix completion; it further proposes a nuclear norm minimization model and solves it by the alternating direction method of multipliers (ADMM).
Tensor Decomposition Via Core Tensor Networks
This paper proposes an efficient TD algorithm that aims to learn a global mapping from input tensors to latent core tensors, under the assumption that the mappings of multiple tensors might be shared or highly correlated.
Tensor Completion with Shift-invariant Cosine Bases
  • Tatsuya Yokota, H. Hontani
  • Computer Science, Mathematics
  • 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
  • 2018
An improvement to MDT-based tensor completion is proposed by exploiting the common phenomenon that most real signals have Fourier bases as shift-invariant features in their auto-correlation matrices, and by considering cosine bases in the high-order tensor.


Tensor completion and low-n-rank tensor recovery via convex optimization
In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor, generalizing ideas from the sparse-vector approximation problem (compressed sensing) and low-rank matrix recovery.
Scalable Tensor Factorizations for Incomplete Data
An algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem and is shown to successfully factorize tensors with noise and up to 99% missing data.
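The weighted objective behind CP-WOPT is analogous to the TT case but with CP factor matrices; a minimal sketch under my own naming (not the authors' code):

```python
import numpy as np

def cp_to_full(factors):
    """Reconstruct a tensor from CP factor matrices A_k of shape (n_k, R):
    the sum of R rank-one outer products of the factor columns."""
    R = factors[0].shape[1]
    full = np.zeros(tuple(f.shape[0] for f in factors))
    for r in range(R):
        outer = factors[0][:, r]
        for f in factors[1:]:
            outer = np.multiply.outer(outer, f[:, r])
        full += outer
    return full

def cp_wopt_loss(factors, data, mask):
    """CP-WOPT-style weighted objective: squared error on observed entries only."""
    residual = mask * (data - cp_to_full(factors))
    return 0.5 * np.sum(residual ** 2)
```

A first-order optimizer would then update the factor matrices using the gradient of this loss.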
Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination
The method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries, and outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions
A focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
Linear image coding for regression and classification using the tensor-rank principle
  • A. Shashua, Anat Levin
  • Mathematics, Computer Science
  • Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001
  • 2001
It is found that for regression the tensor-rank coding, as a dimensionality reduction technique, significantly outperforms other techniques like PCA.
Tensor-Train Decomposition
  • I. Oseledets
  • Mathematics, Computer Science
  • SIAM J. Sci. Comput.
  • 2011
The new form gives a clear and convenient way to implement all basic operations efficiently, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
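The decomposition itself can be computed by the sequential-SVD scheme often called TT-SVD; the sketch below (exact truncation, my own naming) splits a full tensor into cores by repeated reshape-and-SVD:

```python
import numpy as np

def tt_svd(tensor, ranks):
    """TT-SVD sketch: sequentially split a full tensor into TT cores by
    reshaping and truncated SVD. ranks = [r_1, ..., r_{N-1}] caps the
    internal TT ranks; boundary ranks are fixed to 1."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(ranks[k], S.size)
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # carry the remaining factor forward and fold in the next mode
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores
```

With the ranks set to their maximal values the reconstruction is exact; smaller ranks give a low-rank approximation.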
Tensor Decompositions and Applications
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array.
Canonical Polyadic Decomposition with Orthogonality Constraints
Canonical Polyadic Decomposition (CPD) of a higher-order tensor is an important tool in mathematical engineering. In many applications at least one of the matrix factors is constrained to be orthogonal.
Canonical Polyadic Decomposition with a Columnwise Orthonormal Factor Matrix
Orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented, and a simple proof of the existence of the optimal low-rank approximation of a tensor in the case that a factor matrix is columnwise orthonormal is given.
Blind Identification of Underdetermined Mixtures by Simultaneous Matrix Diagonalization
Conditions under which the mixing matrix is unique are presented and several algorithms for its computation are discussed, including a generalization to underdetermined mixtures of the well-known SOBI algorithm. Expand