Publications
Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems
TLDR
A novel method is proposed that starts from an initial guess computed by a spectral method and proceeds by minimizing a nonconvex functional, as in the Wirtinger flow approach, achieving statistical accuracy that is nearly unimprovable.
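As an illustration of the two-step recipe this summary describes, here is a minimal NumPy sketch: a spectral initialization followed by plain gradient descent on the quadratic loss. The real-valued setting, step size, and iteration budget are illustrative assumptions, not the paper's exact (truncated) procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 800                       # signal dimension, number of measurements
x = rng.standard_normal(n)            # ground-truth signal (real case for simplicity)
A = rng.standard_normal((m, n))       # Gaussian sampling vectors a_i
y = (A @ x) ** 2                      # quadratic (phaseless) measurements

# Step 1: spectral initialization -- leading eigenvector of (1/m) sum_i y_i a_i a_i^T
Y = (A.T * y) @ A / m
w, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())      # rescale to the signal's energy level

# Step 2: gradient descent on f(z) = (1/4m) sum_i ((a_i^T z)^2 - y_i)^2
for _ in range(500):
    Az = A @ z
    z -= 0.1 / np.linalg.norm(z) ** 2 * (A.T @ ((Az ** 2 - y) * Az) / m)

dist = min(np.linalg.norm(z - x), np.linalg.norm(z + x))   # global sign ambiguity
print(f"relative error: {dist / np.linalg.norm(x):.2e}")
```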
Robust Spectral Compressed Sensing via Structured Matrix Completion
TLDR
This paper develops a novel algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion, which recovers a spectrally sparse signal from a small random subset of its $n$ time-domain samples without requiring prior knowledge of the model order.
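EMaC itself is a convex (nuclear-norm) program on the enhanced Hankel matrix; as a compact runnable stand-in, the sketch below uses a Cadzow-style alternating projection on the same Hankel lift, alternating a rank-r truncation with consistency on the observed samples. Unlike EMaC, this heuristic assumes the model order r is known, and all instance sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 64, 3                                        # time samples, spectral sparsity
freqs = rng.uniform(0, 1, r)
coeffs = rng.standard_normal(r) + 1j * rng.standard_normal(r)
x_true = np.exp(2j * np.pi * np.outer(np.arange(n), freqs)) @ coeffs
mask = np.zeros(n, bool)
mask[rng.choice(n, size=40, replace=False)] = True  # observe 40 of 64 samples

p = n // 2 + 1                                      # Hankel pencil parameter
idx = np.arange(p)[:, None] + np.arange(n - p + 1)[None, :]

def to_signal(H):
    """Average anti-diagonals: nearest signal whose Hankel lift matches H."""
    x = np.zeros(n, complex)
    counts = np.zeros(n)
    np.add.at(x, idx, H)
    np.add.at(counts, idx, 1.0)
    return x / counts

x_est = np.where(mask, x_true, 0)
for _ in range(300):
    U, s, Vh = np.linalg.svd(x_est[idx], full_matrices=False)
    x_est = to_signal((U[:, :r] * s[:r]) @ Vh[:r])  # rank-r projection
    x_est[mask] = x_true[mask]                      # consistency with observations

err = np.linalg.norm((x_est - x_true)[~mask]) / np.linalg.norm(x_true[~mask])
print(f"relative error on missing samples: {err:.2e}")
```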
Exact and Stable Covariance Estimation From Quadratic Sampling via Convex Programming
TLDR
This paper explores a quadratic (or rank-one) measurement model that imposes minimal memory requirements and low computational complexity during sampling, and shows it to be optimal in preserving various low-dimensional covariance structures.
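The paper recovers the covariance by convex programming; the sketch below substitutes a simpler projected-gradient scheme on the quadratic-sampling least-squares loss, projecting onto the PSD cone at each step. The step size, iteration count, and instance sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, m = 30, 2, 600
B = rng.standard_normal((n, r))
Sigma = B @ B.T / r                        # low-rank ground-truth covariance
A = rng.standard_normal((m, n))            # sketching vectors a_i
y = np.einsum("ij,jk,ik->i", A, Sigma, A)  # quadratic samples y_i = a_i^T Sigma a_i

X = np.zeros((n, n))
eta = 0.5 / n
for _ in range(300):
    resid = np.einsum("ij,jk,ik->i", A, X, A) - y
    X -= eta * (A.T * resid) @ A / m       # gradient of the least-squares loss
    w, V = np.linalg.eigh(X)               # project onto the PSD cone
    X = (V * np.clip(w, 0, None)) @ V.T

print("relative error:", np.linalg.norm(X - Sigma) / np.linalg.norm(Sigma))
```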
Near-Optimal Joint Object Matching via Convex Relaxation
TLDR
An algorithm called MatchLift is developed to jointly match multiple objects that exhibit only partial similarities, given a few densely corrupted pairwise matches; it follows a spectral method that pre-estimates the total number of distinct elements to be matched.
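MatchLift solves a semidefinite relaxation; the sketch below only illustrates the spectral ingredients the summary mentions: estimating the number of distinct elements from the large eigenvalues of the stacked match matrix, followed by a naive eigenvector rounding. The use of scipy's linear_sum_assignment and the fully-similar setup are implementation choices made here, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
N, m, p_corrupt = 20, 5, 0.3        # objects, elements per object, corruption rate

def pmat(perm):
    P = np.zeros((m, m)); P[np.arange(m), perm] = 1; return P

P = [pmat(rng.permutation(m)) for _ in range(N)]

# symmetric block matrix of pairwise matches, a fraction densely corrupted
X = np.eye(N * m)
for i in range(N):
    for j in range(i + 1, N):
        B = P[i] @ P[j].T
        if rng.random() < p_corrupt:
            B = pmat(rng.permutation(m))            # replaced by a random match
        X[i*m:(i+1)*m, j*m:(j+1)*m] = B
        X[j*m:(j+1)*m, i*m:(i+1)*m] = B.T

# spectral pre-estimation of the number of distinct elements: the clean matrix
# has eigenvalue N with multiplicity m, so count the large eigenvalues
w, U = np.linalg.eigh(X)
m_hat = int((w > N / 2).sum())
print("estimated number of elements:", m_hat)

# naive spectral rounding (the paper instead solves a semidefinite program)
Y = (U[:, -m_hat:] * w[-m_hat:]) @ U[:, -m_hat:].T
correct = 0
for i in range(N):
    row, col = linear_sum_assignment(-Y[i*m:(i+1)*m, :m])  # ~ P_i P_0^T
    correct += np.allclose(pmat(col), P[i] @ P[0].T)
print(f"recovered {correct} of {N} maps relative to object 0")
```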
Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
TLDR
This tutorial-style overview highlights the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees and reviews two contrasting approaches: two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and global landscape analysis and initialization-free algorithms.
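A minimal sketch of the two-stage template on low-rank matrix sensing: a spectral initialization built from the measurements, then gradient descent on the factored least-squares loss. The Gaussian sensing model, step size, and constants are illustrative assumptions, not a specific algorithm from the overview.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, m = 40, 2, 1000
U_true = rng.standard_normal((n, r))
M = U_true @ U_true.T                          # rank-r PSD ground truth
A = rng.standard_normal((m, n, n))             # Gaussian sensing matrices
y = np.einsum("kij,ij->k", A, M)               # measurements y_k = <A_k, M>

# Stage 1: spectral initialization from Y = (1/m) sum_k y_k A_k, since E[Y] = M
Y = np.einsum("k,kij->ij", y, A) / m
Y = (Y + Y.T) / 2
w, V = np.linalg.eigh(Y)
U = V[:, -r:] * np.sqrt(np.clip(w[-r:], 0, None))

# Stage 2: gradient descent on f(U) = (1/2m) sum_k (<A_k, UU^T> - y_k)^2
eta = 0.1 / np.linalg.norm(M, 2)
for _ in range(300):
    resid = np.einsum("kij,ij->k", A, U @ U.T) - y
    G = np.einsum("k,kij->ij", resid, A) / m
    U -= eta * (G + G.T) @ U

err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
print(f"relative error: {err:.2e}")
```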
Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization
TLDR
This work develops nonasymptotic convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on tabular discounted Markov decision processes, and demonstrates that the algorithm converges linearly at a rate independent of the dimension of the state-action space.
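A runnable sketch of one limiting case: at step size η = (1−γ)/τ the entropy-regularized NPG update reduces to soft policy iteration, i.e., a softmax of the soft Q-function. That reduction, the randomly generated MDP, and all constants here are assumptions made for illustration, not the paper's general analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
S, A, gamma, tau = 6, 3, 0.9, 0.1           # states, actions, discount, temperature
P = rng.dirichlet(np.ones(S), size=(S, A))  # transition kernel P[s, a, s']
R = rng.uniform(0, 1, (S, A))               # reward table

def soft_eval(pi, iters=400):
    """Entropy-regularized policy evaluation by fixed-point iteration."""
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (pi * (Q - tau * np.log(pi))).sum(axis=1)   # soft state values
        Q = R + gamma * P @ V
    return Q

pi = np.full((S, A), 1.0 / A)
for t in range(30):
    Q = soft_eval(pi)
    logits = (Q - Q.max(axis=1, keepdims=True)) / tau   # numerically stable softmax
    pi_new = np.exp(logits)
    pi_new /= pi_new.sum(axis=1, keepdims=True)
    gap = np.abs(pi_new - pi).max()
    pi = pi_new
print(f"policy change at iteration 30: {gap:.2e}")      # shrinks geometrically
```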
Compressive Two-Dimensional Harmonic Retrieval via Atomic Norm Minimization
TLDR
It is demonstrated that, under a mild spectral separation condition, it is possible to exactly recover all frequencies by solving an atomic norm minimization program, as long as the sample complexity exceeds the order of $r \log r \log n$.
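The paper's atomic norm program is gridless and solved as a semidefinite program; as a compact runnable stand-in, the sketch below swaps in a different but related technique, an l1-minimization surrogate on a fine frequency grid (1-D for brevity), assuming cvxpy is available. The grid size, sampling pattern, and detection threshold are arbitrary choices.

```python
import numpy as np
import cvxpy as cp   # assumed available; any l1-capable convex solver would do

rng = np.random.default_rng(6)
n, G = 64, 512
t = np.arange(n)
freqs = np.array([60, 200, 210]) / G          # on-grid, mildly separated frequencies
amps = rng.standard_normal(3) + 1j * rng.standard_normal(3)
x = np.exp(2j * np.pi * np.outer(t, freqs)) @ amps
obs = np.sort(rng.choice(n, size=48, replace=False))         # random time samples

D = np.exp(2j * np.pi * np.outer(t[obs], np.arange(G) / G))  # candidate atoms
c = cp.Variable(G, complex=True)
cp.Problem(cp.Minimize(cp.norm1(c)), [D @ c == x[obs]]).solve()

support = np.abs(c.value) > 0.01 * np.abs(c.value).max()
print("recovered frequencies:", np.nonzero(support)[0] / G)
```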
Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval
TLDR
This paper provides the first global convergence guarantee concerning vanilla gradient descent for phase retrieval, without the need for (i) carefully designed initialization, (ii) sample splitting, or (iii) sophisticated saddle-point escaping schemes.
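A sketch of the paper's headline phenomenon: vanilla gradient descent on the phase-retrieval least-squares loss started from a small random vector, with no spectral initialization. Problem sizes, step size, and iteration budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 80, 1000
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x) ** 2                          # phaseless measurements

z = rng.standard_normal(n) / np.sqrt(n)   # random initialization, no spectral step
eta = 0.1 / y.mean()                      # ~ mu / ||x||^2, Wirtinger-flow scaling
for _ in range(3000):
    Az = A @ z
    z -= eta * (A.T @ ((Az ** 2 - y) * Az) / m)

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error from a random start: {err:.2e}")
```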
Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion, and Blind Deconvolution
TLDR
By marrying statistical modeling with generic optimization theory, a general recipe for analyzing the trajectories of iterative algorithms via a leave-one-out perturbation argument is developed, establishing that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization.
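A sketch of the matrix-completion instance of this claim: vanilla gradient descent on the unregularized factored loss over observed entries, with no projection, truncation, or added regularizer. The symmetric positive-semidefinite setting and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, r, p = 100, 3, 0.3
U_true = rng.standard_normal((n, r))
M = U_true @ U_true.T                           # rank-r PSD ground truth
mask = rng.random((n, n)) < p
mask = np.triu(mask) | np.triu(mask).T          # symmetric observation pattern

# spectral initialization from the zero-filled, rescaled observations
w, V = np.linalg.eigh(np.where(mask, M, 0) / p)
U = V[:, -r:] * np.sqrt(np.clip(w[-r:], 0, None))

eta = 0.2 / np.linalg.norm(M, 2)                # step size ~ 1 / sigma_1(M)
for _ in range(300):
    E = np.where(mask, U @ U.T - M, 0) / p      # gradient of the plain loss
    U -= eta * 2 * E @ U                        # no explicit regularization

print("relative error:", np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```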
Spectral Method and Regularized MLE Are Both Optimal for Top-$K$ Ranking
TLDR
It is demonstrated that, under a natural random sampling model, the spectral method alone, or the regularized MLE alone, is minimax optimal in terms of sample complexity, i.e., the number of paired comparisons needed to ensure exact top-$K$ identification, in the fixed dynamic range regime.
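A Rank-Centrality-style sketch of the spectral method under a Bradley-Terry-Luce model on an Erdős-Rényi comparison graph: rank items by the stationary distribution of a comparison-driven Markov chain. The chain normalization and all instance sizes are illustrative; the paper's exact variant and constants differ.

```python
import numpy as np

rng = np.random.default_rng(9)
n, K, L, p_edge = 50, 5, 40, 0.5
theta = rng.uniform(0, 1, n)                    # BTL scores, bounded dynamic range
w = np.exp(theta)

adj = np.triu(rng.random((n, n)) < p_edge, 1)   # Erdos-Renyi comparison graph
adj = adj | adj.T
frac = np.zeros((n, n))                         # frac[i, j]: share of wins of j over i
for i, j in zip(*np.nonzero(np.triu(adj))):
    frac[i, j] = rng.binomial(L, w[j] / (w[i] + w[j])) / L
    frac[j, i] = 1 - frac[i, j]

# spectral method: stationary distribution of a comparison-driven Markov chain
d = 2 * adj.sum(axis=1).max()                   # normalization keeps rows stochastic
T = np.where(adj, frac, 0) / d
np.fill_diagonal(T, 1 - T.sum(axis=1))
evals, evecs = np.linalg.eig(T.T)
pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
pi /= pi.sum()                                  # stationary distribution ~ w / sum(w)

overlap = len(set(np.argsort(pi)[-K:]) & set(np.argsort(w)[-K:]))
print(f"top-{K} overlap: {overlap}/{K}")
```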
...