Corpus ID: 17515447

A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery

@article{Wang2017AUV,
  title={A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery},
  author={Lingxiao Wang and Xiao Zhang and Quanquan Gu},
  journal={arXiv: Machine Learning},
  year={2017}
}
We propose a generic framework based on a new stochastic variance-reduced gradient descent algorithm for accelerating nonconvex low-rank matrix recovery. Starting from an appropriate initial estimator, our proposed algorithm performs projected gradient descent based on a novel semi-stochastic gradient specifically designed for low-rank matrix recovery. Based upon the mild restricted strong convexity and smoothness conditions, we derive a projected notion of the restricted Lipschitz continuous… 
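The abstract outlines projected gradient descent driven by a semi-stochastic (variance-reduced) gradient, starting from an appropriate initial estimator. As a rough illustration only, the sketch below instantiates that general template on a matrix-sensing objective; the squared loss, the step size, and the names rank_r_projection and svrg_low_rank_recovery are assumptions for exposition, not the paper's exact semi-stochastic gradient construction or initialization scheme.

```python
import numpy as np

def rank_r_projection(M, r):
    """Project M onto the set of matrices with rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def svrg_low_rank_recovery(A, y, X0, r, step=0.1, n_epochs=20, inner_steps=50, seed=0):
    """SVRG-style projected gradient descent sketched on matrix sensing:
    minimize (1/2n) * sum_i (<A_i, X> - y_i)^2  subject to  rank(X) <= r.
    The semi-stochastic gradient follows the standard variance-reduction template
    g = grad_i(X) - grad_i(X_ref) + full_grad(X_ref).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    X = X0.copy()  # the paper assumes an appropriate initial estimator
    for _ in range(n_epochs):
        X_ref = X.copy()
        # full gradient at the reference point, recomputed once per epoch
        full_grad = sum((np.sum(A[i] * X_ref) - y[i]) * A[i] for i in range(n)) / n
        for _ in range(inner_steps):
            i = rng.integers(n)
            g_i = (np.sum(A[i] * X) - y[i]) * A[i]
            g_ref = (np.sum(A[i] * X_ref) - y[i]) * A[i]
            g = g_i - g_ref + full_grad   # variance-reduced (semi-stochastic) gradient
            X = rank_r_projection(X - step * g, r)
    return X
```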

Citations

A Unified Framework for Nonconvex Low-Rank plus Sparse Matrix Recovery
TLDR
A novel structural Lipschitz gradient condition for low-rank plus sparse matrices is essential for proving the linear convergence rate of the generic algorithm, and is of independent interest for proving fast rates for general superposition-structured models.
A Nonconvex Free Lunch for Low-Rank plus Sparse Matrix Recovery
TLDR
A generic and efficient nonconvex optimization algorithm based on projected gradient descent and a double thresholding operator is proposed; it has much lower computational complexity and enables exact recovery of the unknown low-rank and sparse matrices in the noiseless setting.
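As a rough illustration of the "double thresholding operator" mentioned in the entry above, the sketch below applies a rank-r projection to the low-rank component and entrywise hard thresholding (keeping the s largest-magnitude entries) to the sparse component. The function name, the residual-based split, and the parameters r and s are illustrative assumptions, not the cited paper's exact operator.

```python
import numpy as np

def double_thresholding(M, r, s):
    """Illustrative double thresholding: split M into a rank-r part (via truncated SVD)
    and an s-sparse part (keeping the s largest-magnitude entries of the residual)."""
    # low-rank component: keep the top-r singular triplets
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :r] * sig[:r]) @ Vt[:r, :]
    # sparse component: hard-threshold the residual to its s largest-magnitude entries
    R = (M - L).ravel()
    keep = np.argsort(np.abs(R))[-s:]
    S = np.zeros_like(R)
    S[keep] = R[keep]
    return L, S.reshape(M.shape)
```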
A Unified Framework for Low-Rank plus Sparse Matrix Recovery
TLDR
A novel structural Lipschitz gradient condition for low-rank plus sparse matrices is essential for proving the linear convergence rate of the proposed generic algorithm, and is of independent interest for proving fast rates for general superposition-structured models.
Simple and practical algorithms for 𝓁p-norm low-rank approximation
TLDR
This work proposes practical algorithms for entrywise ℓp-norm low-rank approximation, for p = 1 or p = ∞, and shows that the proposed scheme can attain (1 + ε)OPT approximations.
Fast Low-Rank Matrix Estimation without the Condition Number
TLDR
This paper provides a novel algorithmic framework that achieves the best of both worlds: asymptotically as fast as factorization methods, while requiring no dependency on the condition number.
On Global Linear Convergence in Stochastic Nonconvex Optimization for Semidefinite Programming
TLDR
It is shown that the stochastic gradient descent method can be adapted to solve the nonconvex reformulation of the original convex problem with global linear convergence when using a fixed step size, i.e., it converges exponentially fast to the population minimizer within the optimal statistical precision in the restricted strongly convex case.
On Stochastic Variance Reduced Gradient Method for Semidefinite Optimization
TLDR
Equipped with this, the global linear submanifold convergence of the proposed SVRG method with Option I is established, given a provable initialization scheme and under certain smoothness and restricted strong convexity assumptions.
A Unified Framework for Stochastic Matrix Factorization via Variance Reduction
TLDR
This work proposes a unified framework to speed up the existing stochastic matrix factorization (SMF) algorithms via variance reduction and demonstrates that this framework consistently yields faster convergence and a more accurate output dictionary vis-a-vis state-of-the-art frameworks.
Efficient Algorithm for Sparse Tensor-variate Gaussian Graphical Models via Gradient Descent
TLDR
An efficient alternating gradient descent algorithm is proposed to solve the likelihood-based estimator of the sparse tensor-variate Gaussian graphical model, and it is proved that the algorithm converges linearly to the unknown precision matrices up to the optimal statistical error.
Simple and practical algorithms for $\ell_p$-norm low-rank approximation
TLDR
The theory indicates which problem quantities need to be known in order to obtain a good solution in polynomial time, and does not contradict recent inapproximability results, as in [46].

References

Showing 1-10 of 66 references
A Unified Computational and Statistical Framework for Nonconvex Low-rank Matrix Estimation
TLDR
This work develops a new initialization algorithm to provide a desired initial estimator, which outperforms existing initialization algorithms for nonconvex low-rank matrix estimation and illustrates the superiority of the framework through three examples: matrix regression, matrix completion, and one-bit matrix completion.
Towards Faster Rates and Oracle Property for Low-Rank Matrix Estimation
TLDR
A proximal gradient homotopy algorithm is developed to solve the proposed optimization problem and it is proved that the proposed estimator attains a faster statistical rate than the traditional low-rank matrix estimator with nuclear norm penalty.
A Nonconvex Optimization Framework for Low Rank Matrix Estimation
TLDR
It is proved that a broad class of nonconvex optimization algorithms, including alternating minimization and gradient-type methods, geometrically converge to the global optimum and exactly recover the true low-rank matrices under standard conditions.
Stochastic Variance-reduced Gradient Descent for Low-rank Matrix Recovery from Linear Measurements
TLDR
This work proposes an efficient stochastic variance reduced gradient descent algorithm to solve a nonconvex optimization problem of matrix sensing and proves that the algorithm converges to the unknown low-rank matrix at a linear rate up to the minimax optimal statistical error.
Global Convergence of Stochastic Gradient Descent for Some Non-convex Matrix Problems
TLDR
This paper exhibits a step size scheme for SGD on a low-rank least-squares problem, and proves that, under broad sampling conditions, the method converges globally from a random starting point within $O(\epsilon^{-1} n \log n)$ steps with constant probability for constant-rank problems.
Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees
TLDR
This work provides a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution to the factorized optimization problem with rank constraints.
Nonconvex Sparse Learning via Stochastic Optimization with Progressive Variance Reduction
TLDR
A stochastic variance reduced optimization algorithm for solving sparse learning problems with cardinality constraints that enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions is proposed.
Stochastic Variance Reduction for Nonconvex Optimization
TLDR
This work proves non-asymptotic rates of convergence of SVRG for nonconvex optimization, and shows that it is provably faster than SGD and gradient descent.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
TLDR
Simulations show excellent agreement with the high-dimensional scaling of the error predicted by the theory, and illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections.
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
TLDR
This work proposes an incremental majorization-minimization scheme for minimizing a large sum of continuous functions, a problem of utmost importance in machine learning, and presents convergence guarantees for nonconvex and convex optimization when the upper bounds approximate the objective up to a smooth error.