# SGD_Tucker: A Novel Stochastic Optimization Strategy for Parallel Sparse Tucker Decomposition

@article{Li2021SGD_TuckerAN, title={SGD\_Tucker: A Novel Stochastic Optimization Strategy for Parallel Sparse Tucker Decomposition}, author={Hao Li and Zixuan Li and Kenli Li and Jan S. Rellermeyer and Lydia Yiyu Chen and Keqin Li}, journal={IEEE Trans. Parallel Distributed Syst.}, year={2021}, volume={32}, pages={1828--1841} }

Sparse Tucker Decomposition (STD) algorithms learn a core tensor and a group of factor matrices to obtain an optimal low-rank representation feature for the \underline{H}igh-\underline{O}rder, \underline{H}igh-\underline{D}imension, and \underline{S}parse \underline{T}ensor (HOHDST). However, existing STD algorithms face the problem of intermediate-variable explosion, which results from the formation of those variables, i.e., the Khatri-Rao product, Kronecker product, and…
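The intermediate-variable explosion can be made concrete with a small numpy sketch (all sizes and variable names here are illustrative, not taken from the paper): updating one factor matrix through the unfolded normal equations materializes a Kronecker product of the remaining factors, whose row count grows with the product of the other mode dimensions.

```python
import numpy as np

# Hypothetical sizes for a 3rd-order tensor: dimensions I_n, Tucker ranks R_n.
I = [1000, 1000, 1000]   # tensor mode sizes (illustrative)
R = [10, 10, 10]         # Tucker ranks (illustrative)

# Updating factor matrix A^(1) via the mode-1 unfolding involves the
# Kronecker product A^(3) (x) A^(2): an (I2*I3) x (R2*R3) dense matrix.
rows = I[1] * I[2]
cols = R[1] * R[2]
bytes_needed = rows * cols * 8  # float64 entries
print(f"Kronecker intermediate: {rows} x {cols} "
      f"~ {bytes_needed / 1e9:.1f} GB")

# A small, actually materialized instance of the same intermediate:
A2 = np.random.rand(4, 2)
A3 = np.random.rand(5, 3)
K = np.kron(A3, A2)     # shape (5*4, 3*2): rows and cols multiply
```

Even at these modest hypothetical sizes the intermediate costs ~0.8 GB, and it is rebuilt every update, which is the blowup the stochastic strategy in SGD_Tucker is designed to avoid.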

## 4 Citations

cuFastTucker: A Compact Stochastic Strategy for Large-scale Sparse Tucker Decomposition on Multi-GPUs

- Computer Science
- 2022

A novel method for STD is proposed that approximates the core tensor by a Kruskal product and approximates the whole gradient with a stochastic strategy; it comprises two parts: the matricization unfolding order of the Kruskal product for the core tensor follows the multiplication order of the factor matrices, and the proposed theorem reduces the exponential computational overhead to a linear one.

Fast and Accurate Randomized Algorithms for Low-rank Tensor Decompositions

- Computer Science
- NeurIPS
- 2021

A fast and accurate sketched ALS algorithm for Tucker decomposition is proposed, which solves a sequence of sketched rank-constrained linear least squares subproblems and not only converges faster but also yields more accurate CP decompositions.

Locality Sensitive Hash Aggregated Nonlinear Neighbourhood Matrix Factorization for Online Sparse Big Data Analysis

- Computer Science
- ACM/IMS Transactions on Data Science
- 2021

This work proposes Locality Sensitive Hashing aggregated MF (LSH-MF), which solves the following problems: the proposed probabilistic projection strategy of LSH-MF avoids the construction of the GSM, and the requirement for accurate projection of sparse big data can be satisfied.

## References

SHOWING 1-10 OF 88 REFERENCES

Accelerating the Tucker Decomposition with Compressed Sparse Tensors

- Computer Science
- Euro-Par
- 2017

This work presents an algorithm based on a compressed data structure for sparse tensors and shows that many computational redundancies during TTMc can be identified and pruned without the memory overheads of memoization.
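For orientation, TTMc (tensor-times-matrix chain) contracts the tensor with the factor matrices along every mode but one; a dense numpy sketch (sizes illustrative, and a dense array standing in for the sparse tensor) of the mode-1 TTMc reads:

```python
import numpy as np

# 3rd-order tensor and factor matrices for modes 2 and 3 (illustrative sizes).
X = np.random.rand(6, 7, 8)   # dense stand-in for a sparse tensor
U2 = np.random.rand(7, 3)     # mode-2 factor
U3 = np.random.rand(8, 4)     # mode-3 factor

# Mode-1 TTMc: contract mode 2 with U2 and mode 3 with U3,
# leaving mode 1 free; the result has shape (6, 3, 4).
Y = np.einsum('ijk,jb,kc->ibc', X, U2, U3)
```

In the sparse setting each nonzero of X contributes one rank-1 term to this contraction, and the redundancies the paper prunes arise when nonzeros share index prefixes across modes.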

On optimizing distributed non-negative Tucker decomposition

- Computer Science
- ICS
- 2019

This work develops three algorithms for efficiently executing the non-negative Tucker decomposition (NTD) procedure and presents a distributed implementation of NTD for sparse tensors that scales well, with speedups of up to 12x, along with improved algorithms optimized for properties unique to the NTD procedure.

High-Performance Tucker Factorization on Heterogeneous Platforms

- Computer Science
- IEEE Transactions on Parallel and Distributed Systems
- 2019

GTA, a general framework for Tucker factorization on heterogeneous platforms, is proposed; it performs alternating least squares with a row-wise update rule in a fully parallel way, which significantly reduces memory requirements for updating factor matrices.

High Performance Parallel Algorithms for the Tucker Decomposition of Sparse Tensors

- Computer Science
- 2016 45th International Conference on Parallel Processing (ICPP)
- 2016

A set of preprocessing steps that takes all computational decisions out of the main iteration of the algorithm and provides intuitive shared-memory parallelism for the TTM and TRSVD steps is discussed.

On Optimizing Distributed Tucker Decomposition for Sparse Tensors

- Computer Science
- ICS
- 2018

This work studies the problem of constructing the Tucker decomposition of sparse tensors on distributed memory systems via the HOOI procedure, a popular iterative method, and proposes a lightweight distribution scheme, which achieves the best of both worlds.

Scalable Tucker Factorization for Sparse Tensors - Algorithms and Discoveries

- Computer Science
- 2018 IEEE 34th International Conference on Data Engineering (ICDE)
- 2018

P-Tucker, a scalable Tucker factorization method for sparse tensors, is proposed; it successfully discovers hidden concepts and relations in a large-scale real-world tensor, whereas existing methods cannot reveal latent features due to their limited scalability or low accuracy.

VEST: Very Sparse Tucker Factorization of Large-Scale Tensors

- Computer Science
- 2021 IEEE International Conference on Big Data and Smart Computing (BigComp)
- 2021

This work proposes VEST, a tensor factorization method for large partially observable data that outputs a very sparse core tensor and factor matrices, and automatically searches for the best sparsity ratio, yielding a balanced trade-off between sparsity and accuracy.

DisTenC: A Distributed Algorithm for Scalable Tensor Completion on Spark

- Computer Science
- 2018 IEEE 34th International Conference on Data Engineering (ICDE)
- 2018

DisTenC is a new distributed large-scale tensor completion algorithm on Spark that can handle up to 10~1000X larger tensors than existing methods with a much faster convergence rate, shows better linearity in machine scalability, and achieves an average accuracy improvement of up to 23.5% in applications.

A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion

- Computer Science
- SIAM J. Imaging Sci.
- 2013

This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables, and proposes a generalized block coordinate descent method.
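As a concrete (and much simplified) instance of block coordinate descent on a multiconvex problem: nonnegative matrix factorization X ≈ WH is convex in W with H fixed and vice versa, so the blocks can be updated alternately. The sketch below uses the classic multiplicative updates as the per-block step; it is illustrative only, not the paper's algorithm, and all names are hypothetical.

```python
import numpy as np

# Nonnegative matrix factorization X ~ W @ H via block coordinate descent:
# alternate between the two convex blocks (W with H fixed, H with W fixed).
rng = np.random.default_rng(0)
X = rng.random((20, 15))          # data matrix (illustrative)
r = 4                             # target rank
W = rng.random((20, r))
H = rng.random((r, 15))

def loss(W, H):
    return np.linalg.norm(X - W @ H) ** 2

prev = loss(W, H)
for _ in range(50):
    # Block 1: update W with H fixed (multiplicative step keeps W >= 0).
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    # Block 2: update H with W fixed.
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
```

Each block update is non-increasing for the Frobenius objective, which is the monotonicity property that the paper's convergence analysis generalizes to regularized multiconvex problems.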

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives

- Computer Science
- Found. Trends Mach. Learn.
- 2017

This monograph builds on Tensor Networks for Dimensionality Reduction and Large-scale Optimization by discussing tensor network models for super-compressed higher-order representation of data/parameters and cost functions, together with an outline of their applications in machine learning and data analytics.