Corpus ID: 14828734

Bayesian Mixtures of Bernoulli Distributions

@inproceedings{Maaten2010BayesianMO,
  title={Bayesian Mixtures of Bernoulli Distributions},
  author={Laurens van der Maaten},
  year={2010}
}
The mixture of Bernoulli distributions [6] is a technique that is frequently used for modeling binary random vectors. It differs from (restricted) Boltzmann machines in that it models the marginal distribution over the binary data space X not as a product of (conditional) Bernoulli distributions, but as a weighted sum of Bernoulli distributions. Despite the non-identifiability of the mixture of Bernoulli distributions [3], it has been successfully used to, e.g., dichotomous…
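Concretely, the weighted-sum density is p(x) = Σ_k π_k Π_d μ_kd^{x_d} (1 − μ_kd)^{1 − x_d}, with mixing weights π and per-component Bernoulli means μ_k. As a minimal NumPy sketch (not taken from the paper; all function and variable names are our own), the code below evaluates this log-likelihood and performs one EM update — the maximum-likelihood baseline that a Bayesian treatment such as this paper's would replace with priors over π and μ:

import numpy as np

def log_bernoulli_mixture(X, pi, mu, eps=1e-10):
    """Total log-likelihood of binary data X (N x D) under a K-component
    Bernoulli mixture with weights pi (K,) and means mu (K, D)."""
    mu = np.clip(mu, eps, 1 - eps)
    # log p(x_n | component k), stacked into an N x K matrix
    log_px = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T
    log_joint = log_px + np.log(pi)
    # log-sum-exp over components gives log p(x_n) for each n
    m = log_joint.max(axis=1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(log_joint - m).sum(axis=1))).sum())

def em_step(X, pi, mu, eps=1e-10):
    """One EM iteration; returns updated (pi, mu)."""
    mu = np.clip(mu, eps, 1 - eps)
    log_joint = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
    log_joint -= log_joint.max(axis=1, keepdims=True)
    resp = np.exp(log_joint)
    resp /= resp.sum(axis=1, keepdims=True)          # E-step: responsibilities
    Nk = resp.sum(axis=0)                            # effective component counts
    return Nk / len(X), (resp.T @ X) / Nk[:, None]   # M-step: new pi and mu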


References

Showing 1–10 of 11 references
Practical Identifiability of Finite Mixtures of Multivariate Bernoulli Distributions
Empirical support is given to the fact that estimation of this class of mixtures of multivariate Bernoulli distributions can still produce meaningful results in practice, thus lessening the importance of the identifiability problem.
The Mixture of Bernoulli Experts: a theory to quantify reliance on cues in dichotomous perceptual decisions.
  • B. Backus
  • Psychology, Medicine
  • Journal of Vision
  • 2009
A simple Bayesian theory for dichotomous perceptual decisions is described: the Mixture of Bernoulli Experts (MBE), in which a cue's subjective reliability is the product of a weight and an estimate of the cue's ecological validity.
Markov Chain Sampling Methods for Dirichlet Process Mixture Models
This article reviews Markov chain methods for sampling from the posterior distribution of a Dirichlet process mixture model and presents two new classes of methods. One new approach is to…
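Since this style of sampler underpins Bayesian treatments of mixtures under a Dirichlet process prior, a small illustrative sketch may help. The collapsed Gibbs sampler below follows the conjugate-prior scheme Neal describes (his Algorithm 3), specialized to a DP mixture of Bernoullis with Beta(a, b) priors on the component means; all names, defaults, and structure are our own assumptions, not this paper's implementation:

import numpy as np

def dp_bernoulli_gibbs(X, alpha=1.0, a=1.0, b=1.0, n_iter=100, seed=None):
    """Collapsed Gibbs sampling for a DP mixture of Bernoullis with
    Beta(a, b) priors on the per-dimension means (illustrative sketch).
    X is a binary N x D array; returns the final cluster assignments."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    z = np.zeros(N, dtype=int)            # start with everything in one cluster
    counts = {0: N}                       # cluster sizes
    sums = {0: X.sum(axis=0)}             # per-cluster sums of binary features

    for _ in range(n_iter):
        for n in range(N):
            # remove point n from its current cluster
            k = z[n]
            counts[k] -= 1
            sums[k] -= X[n]
            if counts[k] == 0:
                del counts[k], sums[k]

            ks = list(counts)
            logp = []
            for kk in ks:
                # Beta-Bernoulli posterior predictive for an existing cluster
                p = (a + sums[kk]) / (a + b + counts[kk])
                ll = np.sum(X[n] * np.log(p) + (1 - X[n]) * np.log(1 - p))
                logp.append(np.log(counts[kk]) + ll)
            # prior predictive for a brand-new cluster
            p0 = a / (a + b)
            ll0 = np.sum(X[n] * np.log(p0) + (1 - X[n]) * np.log(1 - p0))
            logp.append(np.log(alpha) + ll0)

            logp = np.array(logp)
            probs = np.exp(logp - logp.max())
            choice = rng.choice(len(probs), p=probs / probs.sum())

            if choice == len(ks):         # open a new cluster
                k = max(counts, default=-1) + 1
                counts[k] = 0
                sums[k] = np.zeros(D)
            else:
                k = ks[choice]
            z[n] = k
            counts[k] += 1
            sums[k] += X[n]
    return z

Each sweep reassigns every point either to an existing cluster, with probability proportional to its size times the Beta-Bernoulli posterior predictive, or to a fresh cluster, with probability proportional to alpha.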
Infinite latent feature models and the Indian buffet process
We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in…
The Infinite Gaussian Mixture Model
This paper presents an infinite Gaussian mixture model which neatly sidesteps the difficult problem of finding the "right" number of mixture components and uses an efficient parameter-free Markov chain that relies entirely on Gibbs sampling.
A Bernoulli mixture model for word categorisation
This paper describes a technique for building a hierarchical word structure through an efficient agglomerative hierarchical clustering algorithm in a syntax-constrained task; the algorithm is efficient because it uses min-heaps to avoid an exhaustive search for each sample's nearest neighbour.
The Infinite Hidden Markov Model
We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the…
On the use of Bernoulli mixture models for text classification
This paper focuses on the application of mixtures of multivariate Bernoulli distributions to binary data arising in a text classification task, aimed at improving language modelling for machine translation.
Restricted Boltzmann machines for collaborative filtering
This paper shows how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBMs), can be used to model tabular data, such as users' ratings of movies, and demonstrates that RBMs can be successfully applied to the Netflix data set.
Hierarchical mixtures of experts and the EM algorithm
  • M. I. Jordan, R. Jacobs
  • Computer Science
  • Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan)
  • 1993
An expectation-maximization (EM) algorithm for adjusting the parameters of the tree-structured architecture for supervised learning is presented, and an online learning algorithm in which the parameters are updated incrementally is developed.