Publications
Tensor decompositions for learning latent variable models
tl;dr: This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models, including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation, that exploits a certain tensor structure in their low-order observable moments. A toy sketch of the core tensor power iteration follows below.
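A minimal NumPy sketch (not the paper's full pipeline) of the tensor power iteration with deflation at the core of this approach, assuming a symmetric, orthogonally decomposable third-order moment tensor; the function and parameter names are illustrative. The same machinery underlies the method-of-moments and spectral LDA entries below.

    import numpy as np

    def tensor_power_method(T, n_components, n_iters=200, seed=0):
        # Recover the components of a symmetric, orthogonally
        # decomposable 3-way tensor T = sum_i w_i * (v_i outer v_i outer v_i)
        # by power iteration with deflation.
        rng = np.random.default_rng(seed)
        T = T.astype(float).copy()
        d = T.shape[0]
        weights, vectors = [], []
        for _ in range(n_components):
            v = rng.standard_normal(d)
            v /= np.linalg.norm(v)
            for _ in range(n_iters):
                v = np.einsum('ijk,j,k->i', T, v, v)    # map v -> T(I, v, v)
                v /= np.linalg.norm(v)
            lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # recovered weight w_i
            weights.append(lam)
            vectors.append(v)
            T -= lam * np.einsum('i,j,k->ijk', v, v, v)  # deflate
        return np.array(weights), np.array(vectors)

    # Toy check: a tensor built from orthonormal vectors is recovered.
    V = np.eye(5)[:3]
    T = sum(w * np.einsum('i,j,k->ijk', v, v, v)
            for w, v in zip([3.0, 2.0, 1.0], V))
    w_hat, V_hat = tensor_power_method(T, n_components=3)
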
A Method of Moments for Mixture Models and Hidden Markov Models
tl;dr: This work develops a computationally efficient method of moments, based only on low-order moments, that can be used to estimate the parameters of a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians.
Non-convex Robust PCA
tl;dr: We propose a new provable method for robust PCA, where the task is to recover a low-rank matrix that has been corrupted by sparse perturbations. A simplified alternating-projection sketch follows below.
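A simplified alternating-projection sketch of the non-convex approach, assuming the observed matrix is a low-rank part plus a sparse part. The paper's algorithm stages the rank and decays the threshold across iterations; this toy version replaces that schedule with fixed hyperparameters (rank and thresh are illustrative names).

    import numpy as np

    def robust_pca_altproj(M, rank, thresh, n_iters=50):
        # Split M into low-rank L plus sparse S by alternating a
        # truncated SVD with entrywise hard thresholding.
        S = np.zeros_like(M, dtype=float)
        for _ in range(n_iters):
            # Low-rank step: best rank-`rank` approximation of M - S.
            U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
            # Sparse step: keep only residual entries above the threshold.
            R = M - L
            S = np.where(np.abs(R) > thresh, R, 0.0)
        return L, S
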
Learning Latent Tree Graphical Models
tl;dr: We study the problem of learning a latent tree graphical model where samples are available only from a subset of variables.
signSGD: compressed optimisation for non-convex problems
tl;dr: We prove that signSGD can get the best of both worlds: compressed gradients and an SGD-level convergence rate. A minimal sketch of the update follows below.
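The update itself is one line per coordinate; below is a minimal NumPy sketch, with a majority_vote helper (an illustrative name) showing how one-bit gradients could be aggregated across workers in the distributed setting the paper analyses.

    import numpy as np

    def signsgd_step(params, grads, lr=0.01):
        # Move each parameter by lr opposite the sign of its
        # stochastic gradient: one bit per coordinate.
        return [p - lr * np.sign(g) for p, g in zip(params, grads)]

    def majority_vote(worker_grads):
        # Distributed variant: the server sums the workers' sign
        # vectors and takes the sign of the total.
        return np.sign(sum(np.sign(g) for g in worker_grads))

    # Toy usage: descend f(w) = ||w||^2 / 2, whose gradient is w.
    w = np.array([3.0, -2.0, 0.5])
    for _ in range(200):
        (w,) = signsgd_step([w], [w], lr=0.02)
    # w now oscillates within about lr of the minimum at the origin.
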
A Spectral Algorithm for Latent Dirichlet Allocation
tl;dr: This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of multi-view models and topic models, including latent Dirichlet allocation (LDA).
Distributed Algorithms for Learning and Cognitive Medium Access with Logarithmic Regret
tl;dr: We propose policies for distributed learning and access that achieve order-optimal cognitive system throughput (number of successful secondary transmissions) under self-play, i.e., when implemented at all the secondary users. An illustrative bandit-style sketch follows below.
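A loose sketch in the spirit of these policies, not the paper's exact algorithm: each user ranks channels by a UCB index, claims the channel at its currently drawn rank, and redraws the rank after a collision. It assumes at least as many channels as users; all names are made up for the example.

    import numpy as np

    def ucb_indices(means, counts, t):
        # UCB1-style index: empirical mean plus an exploration bonus
        # that shrinks as a channel is sampled more.
        return means + np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1))

    class RandomRankUser:
        # One secondary user; requires n_channels >= n_users.
        def __init__(self, n_channels, n_users, seed):
            self.rng = np.random.default_rng(seed)
            self.means = np.zeros(n_channels)
            self.counts = np.zeros(n_channels)
            self.n_users = n_users
            self.rank = self.rng.integers(n_users)  # which "k-th best" channel to claim

        def choose(self, t):
            order = np.argsort(-ucb_indices(self.means, self.counts, t))
            return order[self.rank]

        def update(self, channel, reward, collided):
            self.counts[channel] += 1
            self.means[channel] += (reward - self.means[channel]) / self.counts[channel]
            if collided:
                # Collision with another user: redraw the target rank.
                self.rank = self.rng.integers(self.n_users)
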
Born Again Neural Networks
tl;dr: Knowledge distillation (KD) consists of transferring knowledge from one machine learning model (the teacher) to another (the student). A minimal sketch of the distillation loss follows below.
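A minimal NumPy sketch of the generic KD transfer term: cross-entropy of the student against the teacher's temperature-softened outputs. In Born-Again Networks the student shares the teacher's architecture; the temperature T = 2.0 and the names below are illustrative choices, not the paper's settings.

    import numpy as np

    def softened_softmax(z, T=1.0):
        # Softmax at temperature T; higher T spreads probability mass.
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Cross-entropy of the student against the teacher's
        # softened output distribution.
        p_teacher = softened_softmax(teacher_logits, T)
        log_p_student = np.log(softened_softmax(student_logits, T) + 1e-12)
        return -np.mean(np.sum(p_teacher * log_p_student, axis=-1))
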
Stochastic Activation Pruning for Robust Adversarial Defense
tl;dr: We propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. A toy sketch of the sampling scheme follows below.
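A toy sketch of the sampling scheme as I read it: activations are sampled with probability proportional to their magnitude, the rest are zeroed, and survivors are reweighted by their inverse keep-probability so the layer output stays unbiased in expectation. The function name and keep_frac parameter are illustrative.

    import numpy as np

    def stochastic_activation_prune(h, keep_frac=0.5, rng=None):
        # Operates on a flattened activation vector h.
        rng = rng or np.random.default_rng()
        h = np.ravel(np.asarray(h, dtype=float))
        p = np.abs(h) / (np.abs(h).sum() + 1e-12)    # sample prob proportional to magnitude
        r = max(1, int(keep_frac * h.size))          # number of draws
        drawn = rng.choice(h.size, size=r, replace=True, p=p)
        keep = np.zeros(h.size, dtype=bool)
        keep[drawn] = True
        q = 1.0 - (1.0 - p) ** r                     # P(index i survives r draws)
        return np.where(keep, h / np.maximum(q, 1e-12), 0.0)
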
Opportunistic Spectrum Access with Multiple Users: Learning under Competition
tl;dr: The problem of cooperative allocation among multiple secondary users to maximize cognitive system throughput is considered.