Faster Matrix Completion Using Randomized SVD

@inproceedings{Feng2018FasterMC,
  title={Faster Matrix Completion Using Randomized SVD},
  author={Xu Feng and Wenjian Yu and Yaohang Li},
  booktitle={2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI)},
  year={2018},
  pages={608--615}
}
  • Published 16 October 2018
  • Computer Science
Matrix completion is a widely used technique in image inpainting, personalized recommender systems, and other applications. In this work, we focus on accelerating matrix completion using faster randomized singular value decomposition (rSVD). First, two fast randomized algorithms (rSVD-PI and rSVD-BKI) are proposed for handling sparse matrices. They make use of an eigSVD procedure and several acceleration techniques. Then, with the rSVD-BKI algorithm and a new subspace recycling technique, we accelerate the… 
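The power-iteration flavor of randomized SVD mentioned in the abstract can be illustrated with a short NumPy sketch. This is a generic rSVD with power iteration in the spirit of rSVD-PI, not the paper's implementation; the function name and parameters (`p` oversampling, `q` power iterations) are illustrative choices:

```python
import numpy as np

def rsvd_pi(A, k, p=5, q=2, rng=None):
    """Generic randomized SVD with power iteration (a sketch in the
    spirit of rSVD-PI, not the paper's code).
    k: target rank, p: oversampling, q: power iterations."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    # Power iterations sharpen the spectrum; re-orthonormalize for stability.
    for _ in range(q):
        Y, _ = np.linalg.qr(Y)
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Project to the small subspace and take an exact SVD there.
    B = Q.T @ A
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Uh
    return U[:, :k], s[:k], Vt[:k, :]

# Usage: approximate an exactly rank-10 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))
U, s, Vt = rsvd_pi(A, k=10, rng=1)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For a sparse input, the two products `A @ Omega` and `A.T @ Y` are the only places `A` is touched, which is why sparse matrix-matrix multiply dominates the cost.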

Citations

Efficient GPU implementation of randomized SVD and its applications

TLDR
This work reformulates the randomized decomposition problem to incorporate fast matrix multiplication operations (BLAS-3) as building blocks and shows that this formulation, combined with fast random number generators, allows one to fully exploit the potential of parallel processing implemented in GPUs.

Randomized Algorithms for Computation of Tucker Decomposition and Higher Order SVD (HOSVD)

TLDR
This article reviews recent advances in randomization for the computation of the Tucker decomposition and the Higher Order SVD, and discusses random projection and sampling approaches, single-pass and multi-pass randomized algorithms, and how to utilize them in the computation of the Tucker decomposition and the HOSVD.

Bayesian Low-rank Matrix Completion with Dual-graph Embedding: Prior Analysis and Tuning-free Inference

TLDR
A novel Bayesian learning algorithm is proposed that automatically learns the hyper-parameters associated with dual-graph regularization and, at the same time, guarantees the low-rankness of the completed matrix.

Masked-RPCA: Moving Object Detection With an Overlaying Model

TLDR
The Masked-RPCA (MRPCA) algorithm is introduced to recover the mask (hence the sparse object) and the low-rank components simultaneously, via a non-convex formulation, using convex surrogates and methods based on the additive model.

High-dimensional fast convolutional framework for calibrationless MRI

TLDR
The preliminary results show the promise of HICU in 2D static and 2D dynamic applications; moreover, the proposed framework can be readily extended to other multi-coil MRI applications.

Robust Three-Microphone Speech Source Localization Using Randomized Singular Value Decomposition

TLDR
A novel DOA technique based on randomized singular value decomposition (RSVD) is presented to improve the performance of non-uniform non-linear microphone arrays (NUNLA), along with an efficient real-time implementation on a Pixel 3 Android smartphone using its built-in three microphones for hearing aid applications.

Implicit steepest descent algorithm for optimization with orthogonality constraints

TLDR
This work proposes a new framework that combines steepest gradient descent, using implicit information, with a projection operator in order to construct a feasible sequence of points, and shows that the new procedure can outperform some state-of-the-art solvers on practical problems.

Solving Weighted Orthogonal Procrustes Problems via a Projected Gradient Method

This paper proposes a new line search method to deal with weighted orthogonal Procrustes problems. The nonmonotone line search of Zhang and Hager and an adaptive Barzilai–Borwein step size are the…

Selecting Regularization Parameters for Nuclear Norm--Type Minimization Problems

The reconstruction of a low-rank matrix from its noisy observation finds usage in many applications. It can be reformulated into a constrained nuclear norm minimization problem, where the bound η…

Randomized block Krylov space methods for trace and log-determinant estimators

We present randomized algorithms based on the block Krylov space method for estimating the trace and log-determinant of Hermitian positive semi-definite matrices. Using the properties of Chebyshev…
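The core idea behind stochastic trace estimation can be shown with the classic Hutchinson estimator, a simpler cousin of the block Krylov estimators in this work: for random probe vectors z with independent ±1 entries, E[zᵀAz] = tr(A). This sketch is a generic illustration; the function name and parameters are mine:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, rng=None):
    """Hutchinson stochastic trace estimator: averages z^T A z over
    Rademacher probe vectors z, using only matrix-vector products."""
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        est += z @ matvec(z)
    return est / num_samples

# Usage: estimate the trace of a PSD matrix from matrix-vector products only.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T  # Hermitian positive semi-definite
est = hutchinson_trace(lambda v: A @ v, 50, num_samples=2000, rng=1)
exact = np.trace(A)
```

Block Krylov variants replace the plain averages with projections onto a Krylov subspace, which sharply reduces the number of probes needed for a given accuracy.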

References

SHOWING 1-10 OF 25 REFERENCES

Improved Bounded Matrix Completion for Large-Scale Recommender Systems

TLDR
This paper proposes a new approach for solving BMC under the ADMM framework, which can reach a lower objective function value, obtain higher prediction accuracy, and scale better than existing bounded matrix completion approaches.

Compressed Singular Value Decomposition for Image and Video Processing

TLDR
The compressed singular value decomposition (cSVD) algorithm employs aggressive random test matrices to efficiently sketch the row space of the input matrix and enables the computation of an accurate approximation of the dominant high-dimensional left and right singular vectors.

Randomized methods for matrix computations and analysis of high dimensional data

TLDR
A number of randomized techniques that have recently been proposed for computing matrix factorizations and for analyzing high-dimensional data sets are surveyed, including so-called "structure preserving" factorizations such as the Interpolative Decomposition (ID) and the CUR decomposition.

RSVDPACK: An implementation of randomized algorithms for computing the singular value, interpolative, and CUR decompositions of matrices on multi-core and GPU architectures

TLDR
This manuscript presents some modifications to the basic algorithms that improve performance and ease of use of RSVDPACK.

Matrix completion by Truncated Nuclear Norm Regularization

TLDR
This paper proposes a novel matrix completion algorithm based on Truncated Nuclear Norm Regularization (TNNR), which minimizes only the smallest N−r singular values, where N is the number of singular values and r is the rank of the matrix.

Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition

TLDR
This analysis is the first of a Krylov subspace method that does not depend on singular value gaps, which are unreliable in practice, and clarifies how simple techniques can take advantage of common matrix properties to significantly improve runtime.
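The "keep every block" idea behind this reference can be sketched in a few lines of NumPy. Unlike plain power iteration, which discards intermediate powers, block Krylov methods orthonormalize the whole stack [AΩ, (AAᵀ)AΩ, …], which is what removes the dependence on singular value gaps. This is a generic illustration, not the authors' code:

```python
import numpy as np

def block_krylov_svd(A, k, q=3, rng=None):
    """Randomized block Krylov SVD sketch: build the Krylov blocks,
    orthonormalize the whole subspace, then solve the small projected
    SVD. Parameter names are illustrative."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    Y = A @ rng.standard_normal((n, k))
    blocks = [Y]
    for _ in range(q):
        Y, _ = np.linalg.qr(Y)          # re-orthonormalize for stability
        Y = A @ (A.T @ Y)
        blocks.append(Y)
    # Orthonormal basis for the full Krylov subspace, not just the last block.
    Q, _ = np.linalg.qr(np.hstack(blocks))
    B = Q.T @ A                          # project to the small subspace
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k, :]

# Usage: recover an exactly rank-8 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 8)) @ rng.standard_normal((8, 90))
U, s, Vt = block_krylov_svd(A, k=8, rng=1)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The price is a QR factorization of a basis q+1 times wider than the target rank, traded for fewer passes over A at a given accuracy.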

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

TLDR
This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation, and presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions.

A Singular Value Thresholding Algorithm for Matrix Completion

TLDR
This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
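The SVT recipe alternates soft-thresholding of singular values with dual gradient steps on the observed entries. The sketch below follows that general recipe; the function name, parameter choices, and test problem are illustrative, not the paper's:

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    """Singular value thresholding sketch for matrix completion.
    M_obs: matrix holding observed entries (zeros elsewhere),
    mask: 1.0 where observed, tau: threshold, delta: step size."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrinkage operator D_tau(Y)
        Y += delta * mask * (M_obs - X)          # dual ascent on observed entries
    return X

# Usage: complete a rank-2 matrix from roughly 60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random((40, 40)) < 0.6).astype(float)
X = svt_complete(mask * M, mask, tau=200.0, delta=1.5, iters=500)
obs_err = np.linalg.norm(mask * (X - M)) / np.linalg.norm(mask * M)
```

Each iteration needs only a partial SVD of Y, which is exactly where randomized SVD routines such as those in the surveyed paper can be plugged in to speed up completion.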

Sketching as a Tool for Numerical Linear Algebra

TLDR
This survey highlights the recent advances in algorithms for numerical linear algebra that have come from the technique of linear sketching, and considers least squares as well as robust regression problems, low-rank approximation, and graph sparsification.