
- Prateek Jain, Praneeth Netrapalli, Sujay Sanghavi
- STOC
- 2013

Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge…
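The alternating step this abstract refers to — fix one low-rank factor, solve a least-squares problem for the other — can be sketched as below. This is an illustrative baseline only: the function name, random initialization, and iteration count are my own, not the paper's algorithm (which uses a careful spectral initialization in its analysis).

```python
import numpy as np

def altmin_completion(M, mask, rank, iters=50, seed=0):
    """Alternating least squares for matrix completion.

    M    : observed matrix (entries outside the mask are ignored)
    mask : boolean array, True where an entry is observed
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Fix V: each row of U solves its own small least-squares problem
        # over that row's observed entries.
        for i in range(m):
            obs = mask[i]
            if obs.any():
                U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
        # Fix U: symmetric update for each row of V (column of M).
        for j in range(n):
            obs = mask[:, j]
            if obs.any():
                V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
    return U, V
```

On a noiseless low-rank matrix with a reasonable fraction of entries observed, this recovers the matrix to high accuracy, which is the empirical behavior the abstract describes.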

- Praneeth Netrapalli, Prateek Jain, Sujay Sanghavi
- IEEE Transactions on Signal Processing
- 2013

Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating…
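A minimal sketch of the alternating scheme for the real-valued case: impute the missing signs from the current estimate, then solve an ordinary least-squares problem with those signs held fixed. The spectral initialization mirrors the style of initialization used in this line of work; the function name and the specific constants are assumptions of mine, not the paper's exact procedure.

```python
import numpy as np

def altmin_phase(A, y, iters=100):
    """Alternating minimization for real-valued phase retrieval:
    recover x (up to global sign) from y = |A x|."""
    # Spectral initialization: top eigenvector of (1/m) sum_i y_i^2 a_i a_i^T,
    # scaled to match the measurement energy.
    S = (A * (y ** 2)[:, None]).T @ A / len(y)
    _, V = np.linalg.eigh(S)
    x = V[:, -1] * np.sqrt(np.mean(y ** 2))
    for _ in range(iters):
        # Step 1: impute the missing signs from the current estimate.
        signs = np.sign(A @ x)
        signs[signs == 0] = 1.0
        # Step 2: least squares with the imputed signs held fixed.
        x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)
    return x
```

With Gaussian measurements and enough samples, the iterates converge to the true signal up to a global sign flip, which is the inherent ambiguity of the problem.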

- Alekh Agarwal, Anima Anandkumar, Prateek Jain, Praneeth Netrapalli
- SIAM Journal on Optimization
- 2016

We consider the problem of sparse coding, where each sample consists of a sparse linear combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps,…
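The alternate steps mentioned here can be sketched as a simple heuristic loop: a coefficient step that hard-thresholds correlations with the current dictionary, and a dictionary step that solves least squares and renormalizes columns. This is a generic illustration of the alternating heuristic, not the algorithm analyzed in the paper; all names and the thresholding rule are my own simplifications.

```python
import numpy as np

def sparse_code_altmin(Y, n_atoms, k, iters=20, seed=0):
    """Alternating minimization heuristic for sparse coding, Y ≈ D X,
    where each column of X is k-sparse."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, n))
    for _ in range(iters):
        # Coefficient step: keep the k largest |correlations| per sample.
        C = D.T @ Y
        X = np.zeros_like(C)
        top = np.argpartition(np.abs(C), -k, axis=0)[-k:]
        np.put_along_axis(X, top, np.take_along_axis(C, top, axis=0), axis=0)
        # Dictionary step: least squares, then renormalize the columns.
        D = Y @ np.linalg.pinv(X)
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)
    return D, X
```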

- Praneeth Netrapalli, Sujay Sanghavi
- SIGMETRICS
- 2012

We consider the problem of finding the graph on which an epidemic spreads, given *only* the times when each node gets infected. While this is a problem of central importance in several contexts -- offline and online social networks, e-commerce, epidemiology -- there has been very little work, analytical or empirical, on finding the graph. Clearly, it…

We propose a new method for robust PCA – the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is non-convex but easy to compute. In spite of this…
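The two non-convex projections described above are both cheap: a truncated SVD projects onto rank-r matrices, and entrywise hard thresholding projects onto sparse matrices. The sketch below alternates them with a fixed rank and threshold; the paper's actual algorithm uses a stagewise rank/threshold schedule, so treat the constants here as illustrative assumptions.

```python
import numpy as np

def altproj_rpca(M, rank, thresh, iters=30):
    """Alternating projections for robust PCA, M ≈ L + S:
    L low-rank, S sparse."""
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(iters):
        # Project the residual M - S onto rank-r matrices (truncated SVD).
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Project the residual M - L onto sparse matrices (hard threshold).
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S
```

When the corruptions are large relative to the threshold and their support is sparse, the two projections quickly separate the low-rank and sparse parts.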

- Praneeth Netrapalli, Siddhartha Banerjee, Sujay Sanghavi, Sanjay Shakkottai
- 2010 48th Annual Allerton Conference on…
- 2010

Markov Random Fields (MRFs), a.k.a. Graphical Models, serve as popular models for networks in the social and biological sciences, as well as communications and signal processing. A central problem is one of structure learning or model selection: given samples from the MRF, determine the graph structure of the underlying distribution. When the MRF is not…

- Jess Banks, Cristopher Moore, Joe Neeman, Praneeth Netrapalli
- COLT
- 2016

We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, let k be the number of groups, d be the average degree, the probability of edges between vertices within and between groups be c_in/n and c_out/n respectively, and let λ = (c_in − c_out)/(kd). We show that, when k is large,…
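The signal parameter λ defined in the abstract is simple arithmetic; a small helper makes the dependence on the block-model parameters concrete. The formula for the average degree d assumes k equal-sized groups (a standard convention, stated here as an assumption since the snippet above does not spell it out).

```python
def sbm_lambda(c_in, c_out, k):
    """Signal parameter λ = (c_in − c_out) / (k·d) from the abstract,
    where d = (c_in + (k − 1)·c_out) / k is the average degree under
    k equal-sized groups."""
    d = (c_in + (k - 1) * c_out) / k
    return (c_in - c_out) / (k * d)
```

For example, with k = 2, c_in = 5, c_out = 1: d = 3 and λ = 4/6 = 2/3.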

- Alekh Agarwal, Anima Anandkumar, Praneeth Netrapalli
- ArXiv
- 2013

We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our method consists of two stages, viz., initial estimation of the dictionary, and a clean-up phase involving estimation of the coefficient matrix, and re-estimation of the dictionary. We prove that…

- Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
- ICML
- 2017

This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost “dimension-free”). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to…
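The mechanism is easy to sketch: run plain gradient descent, and whenever the gradient is small (a candidate saddle point), inject noise sampled from a small ball to escape strict saddles. The version below is a heavily simplified illustration, not the paper's algorithm (which perturbs at most once per escape episode and monitors function decrease); the step size, thresholds, and radius are my own choices.

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.05, g_thresh=1e-3, radius=1e-2,
                 iters=2000, seed=0):
    """Gradient descent with random perturbations at near-stationary
    points, in the spirit of perturbed gradient descent."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh:
            # Candidate saddle: add noise uniform in a ball of given radius.
            u = rng.standard_normal(x.shape)
            u *= radius * rng.random() ** (1.0 / x.size) / np.linalg.norm(u)
            x = x + u
        else:
            x = x - eta * g
    return x
```

Started exactly at the saddle (0, 0) of f(x, y) = (x² − 1)² + y², plain gradient descent would never move; the perturbation lets the iterate fall into one of the minima at (±1, 0).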

- Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford
- ArXiv
- 2015

In this paper we provide faster algorithms and improved sample complexities for approximating the top eigenvector of a matrix AᵀA. In particular we give the following results for computing an approximate eigenvector, i.e. some x such that xᵀAᵀAx ≥ (1 − ε)λ₁(AᵀA): • Offline Eigenvector Estimation: Given an explicit matrix A ∈ ℝ^(n×d), we show how to compute an…
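For reference, the baseline the paper improves on is plain power iteration, which already produces an x satisfying the stated guarantee xᵀAᵀAx ≥ (1 − ε)λ₁(AᵀA), just with an iteration count that degrades as the eigengap shrinks. This sketch is the textbook method, not the paper's faster algorithms.

```python
import numpy as np

def top_eigvec(A, iters=1000, seed=0):
    """Plain power iteration for the top eigenvector of AᵀA.
    Each step applies AᵀA as two matrix-vector products and
    renormalizes, so A is never formed into AᵀA explicitly."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = A.T @ (A @ x)
        x /= np.linalg.norm(x)
    return x
```

The Rayleigh quotient xᵀAᵀAx = ‖Ax‖² then approaches λ₁(AᵀA) geometrically at rate λ₂/λ₁ per iteration.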