Corpus ID: 12082839

Spectral Methods for Nonparametric Models

@article{Tung2017SpectralMF,
  title={Spectral Methods for Nonparametric Models},
  author={H. Tung and Chao-Yuan Wu and M. Zaheer and Alex Smola},
  journal={ArXiv},
  year={2017},
  volume={abs/1704.00003}
}
Nonparametric models are versatile, albeit computationally expensive, tools for modeling mixture models. In this paper, we introduce spectral methods for the two most popular nonparametric models: the Indian Buffet Process (IBP) and the Hierarchical Dirichlet Process (HDP). We show that using spectral methods for the inference of nonparametric models is computationally and statistically efficient. In particular, we derive the lower-order moments of the IBP and the HDP, propose spectral…
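The moment-estimation step at the heart of such spectral methods is prior-agnostic: form low-order empirical moments of the observations, then decompose them to recover latent parameters. A minimal sketch of that first step, with synthetic binary data loosely standing in for IBP feature draws (the data, dimensions, and function name here are illustrative, not from the paper):

```python
import numpy as np

def empirical_moments(X):
    """Empirical first moment M1 = E[x] and second cross-moment M2 = E[x x^T]."""
    n = X.shape[0]
    M1 = X.mean(axis=0)
    M2 = (X.T @ X) / n
    return M1, M2

rng = np.random.default_rng(0)
# Synthetic binary feature assignments (illustrative stand-in for IBP draws).
X = rng.binomial(1, 0.3, size=(1000, 5)).astype(float)
M1, M2 = empirical_moments(X)
print(M1)
print(np.round(M2, 2))
```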
SpectralFPL: Online Spectral Learning for Single Topic Models
This work develops a new online learning algorithm for latent variable models, shows that SpectralLeader performs similarly to or better than online EM with tuned hyper-parameters, and derives a sublinear upper bound on its $n$-step regret in the bag-of-words model.
SpectralLeader: Online Spectral Learning for Single Topic Models
A new online learning algorithm for latent variable models, called SpectralLeader, which converges to the global optimum and has a sublinear upper bound on its n-step regret in a single topic model.
SpectralLeader: Online Spectral Learning for Single Topic Models
We study the problem of learning a latent variable model from a stream of data. Latent variable models are popular in practice because they can explain observed data in terms of unobserved concepts. A running-moment sketch of the online setting follows.
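The core idea of online spectral learning is to maintain running averages of the moment statistics and re-decompose them as data stream in. A sketch of that bookkeeping only; the class name and update rule are illustrative, and SpectralLeader's actual estimator and regret analysis are in the paper:

```python
import numpy as np

class RunningMoments:
    """Incrementally averaged second moment M2 = E[x x^T] over a stream."""
    def __init__(self, d):
        self.n = 0
        self.M2 = np.zeros((d, d))

    def update(self, x):
        self.n += 1
        self.M2 += (np.outer(x, x) - self.M2) / self.n  # running-mean update

stream = np.random.default_rng(1).standard_normal((500, 4))
rm = RunningMoments(d=4)
for x in stream:
    rm.update(x)
print(np.round(rm.M2, 2))
```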
Estimating posterior inference quality of the relational infinite latent feature model for overlapping community detection
A flexible nonparametric Bayesian generative model for count-valued networks is proposed, which allows the number of features K to grow as more data are encountered rather than being fixed in advance.
Sublinear Time Orthogonal Tensor Decomposition
In a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., without reading most of the input tensor, by using importance sampling instead of sketches to estimate inner products in tensor decomposition algorithms.
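A sketch of the kind of importance-sampled inner product such algorithms rely on; this is the standard length-squared sampling primitive, not necessarily the paper's exact estimator, and the function name is illustrative:

```python
import numpy as np

def sampled_inner_product(u, v, m, rng):
    """Unbiased estimate of <u, v> from m coordinates sampled with p_i proportional to u_i^2."""
    p = u**2 / np.dot(u, u)
    idx = rng.choice(len(u), size=m, p=p)
    return np.mean(u[idx] * v[idx] / p[idx])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(10_000), rng.standard_normal(10_000)
print(sampled_inner_product(u, v, m=500, rng=rng), np.dot(u, v))
```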

References

Showing 1-10 of 41 references
Variational Inference for the Indian Buffet Process
A deterministic variational method for inference in the IBP based on a truncated stick-breaking approximation is developed, theoretical bounds on the truncation error are provided, and the method is evaluated in several data regimes.
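The truncated stick-breaking construction this method builds on is short enough to sketch: with nu_j ~ Beta(alpha, 1), the feature-inclusion probabilities are pi_k = prod_{j<=k} nu_j, truncated at K features. The values of alpha and K below are illustrative choices:

```python
import numpy as np

def ibp_stick_breaking(alpha, K, rng):
    """Truncated IBP stick-breaking: pi_k = prod_{j<=k} nu_j, nu_j ~ Beta(alpha, 1)."""
    nu = rng.beta(alpha, 1.0, size=K)
    return np.cumprod(nu)  # decreasing feature-inclusion probabilities

rng = np.random.default_rng(0)
print(np.round(ibp_stick_breaking(alpha=2.0, K=10, rng=rng), 3))
```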
Sharing Features among Dynamical Systems with Beta Processes
This work develops an efficient Markov chain Monte Carlo inference method based on the Indian buffet process representation of the predictive distribution of the beta process, and uses the sum-product algorithm to efficiently compute Metropolis-Hastings acceptance probabilities.
Variational inference for Dirichlet process mixtures
Dirichlet process (DP) mixture models are the cornerstone of nonparametric Bayesian statistics, and the development of Markov chain Monte Carlo (MCMC) sampling methods for DP mixtures has enabled…
Spectral Methods for Learning Multivariate Latent Tree Structure
This work considers the problem of learning the structure of multivariate linear tree models, which include a variety of directed tree graphical models with continuous, discrete, and mixed latent…
Infinite latent feature models and the Indian buffet process
We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in…
Learning mixtures of spherical Gaussians: moment methods and spectral decompositions
A computationally efficient and statistically consistent moment-based estimator for mixtures of spherical Gaussians is provided, under the condition that component means are in general position, without the additional minimum-separation assumptions needed by previous computationally efficient estimation procedures.
Tensor decompositions for learning latent variable models
A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices, which implies a robust and computationally tractable estimation approach for several popular latent variable models.
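A minimal sketch of the plain tensor power iteration that this paper robustifies, applied to a symmetric third-order tensor T: repeat v <- T(I, v, v) / ||T(I, v, v)||, then read off the eigenvalue. Deflation, restarts, and the perturbation analysis are omitted here:

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, rng=None):
    """Plain (non-robust) power iteration on a symmetric 3rd-order tensor T."""
    rng = rng or np.random.default_rng()
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = np.einsum('ijk,j,k->i', T, v, v)  # contract T along two modes
        v = w / np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
    return lam, v

# Toy check: the rank-1 tensor 2 * e1 (x) e1 (x) e1 is recovered.
e1 = np.eye(3)[0]
T = 2.0 * np.einsum('i,j,k->ijk', e1, e1, e1)
lam, v = tensor_power_iteration(T, rng=np.random.default_rng(0))
print(lam, np.round(v, 3))
```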
Hilbert Space Embeddings of Hidden Markov Models
This work proposes a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions, and derives a local-minimum-free kernel spectral algorithm for learning these HMMs.
Infinite Sparse Factor Analysis and Infinite Independent Components Analysis
Four variants of the nonparametric Bayesian extension of Independent Components Analysis are described, with Gaussian or Laplacian priors on X and the one- or two-parameter IBPs, and Bayesian inference under these models is demonstrated using a Markov chain Monte Carlo algorithm.
Mixtures of Dirichlet Processes with Applications to Bayesian Nonparametric Problems
This paper extends Ferguson's result to cases where the random measure is a mixing distribution for a parameter which determines the distribution from which observations are made…