• Corpus ID: 29311714

Online Bayesian Learning in Probabilistic Graphical Models using Moment Matching with Applications

@inproceedings{Omar2016OnlineBL,
  title={Online Bayesian Learning in Probabilistic Graphical Models using Moment Matching with Applications},
  author={Farheen Omar},
  year={2016}
}
  • F. Omar
  • Published 18 May 2016
  • Computer Science
Probabilistic Graphical Models are often used to efficiently encode uncertainty in real-world problems as probability distributions. Bayesian learning allows us to compute a posterior distribution over the parameters of these distributions based on observed data. One of the main challenges in Bayesian learning is that the posterior distribution can become exponentially complex as new data becomes available. A second challenge is that many algorithms require all the data to be present in memory before the… 
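The exponential blow-up mentioned in the abstract is what moment matching sidesteps: after each observation the exact posterior becomes a mixture, and projecting that mixture back onto a fixed family (by matching its first two moments) keeps the representation constant-size. The following is a minimal illustrative sketch of this idea for a Beta-Bernoulli model with observation noise, not the thesis's actual implementation; the noise parameter `eps` and function names are assumptions for illustration.

```python
def beta_moments(a, b):
    """First and second moments E[t], E[t^2] of Beta(a, b)."""
    m1 = a / (a + b)
    m2 = a * (a + 1) / ((a + b) * (a + b + 1))
    return m1, m2

def bmm_update(a, b, x, eps):
    """One online moment-matching step.

    Observe x in {0, 1}, assumed flipped with probability eps.
    The exact posterior is a mixture of Beta(a+1, b) and Beta(a, b+1);
    we project it back onto a single Beta by matching E[t] and E[t^2],
    so the posterior stays one Beta instead of a growing mixture.
    """
    mean = a / (a + b)
    if x == 1:
        w1 = (1 - eps) * mean        # branch: observation was true
        w2 = eps * (1 - mean)        # branch: observation was flipped
    else:
        w1 = eps * mean
        w2 = (1 - eps) * (1 - mean)
    z = w1 + w2                      # normalize the mixture weights
    w1, w2 = w1 / z, w2 / z
    m1a, m2a = beta_moments(a + 1, b)
    m1b, m2b = beta_moments(a, b + 1)
    m1 = w1 * m1a + w2 * m1b         # moments of the exact mixture
    m2 = w1 * m2a + w2 * m2b
    # Recover the Beta(a', b') with the same first two moments.
    s = (m1 - m2) / (m2 - m1 * m1)
    return m1 * s, (1 - m1) * s
```

With `eps = 0` the mixture collapses to a single component and the update reduces to the exact conjugate Beta update, which is a useful sanity check; with noise, each step stays O(1) regardless of how much data has been seen.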
Online Bayesian Moment Matching for Topic Modeling with Unknown Number of Topics
TLDR
Two new models that extend LDA in a simple and intuitive fashion by directly expressing a distribution over the number of topics are proposed, along with a new online Bayesian moment matching technique to learn the parameters and the number of topics of those models from streaming data.
Likelihood-based Density Estimation using Deep Architectures
TLDR
This thesis provides a principled study of parametric density estimation methods using mixture models and triangular maps for neural density estimation and provides a unified framework for estimating densities using monotone and bijective triangular maps represented using deep neural networks.
Online Bayesian Transfer Learning for Sequential Data Modeling
TLDR
This work combines hidden Markov models with Gaussian mixture models that are learned based on streaming data by online Bayesian moment matching and demonstrates the resulting transfer learning technique with three real-world applications.
Reinforcement Learning with Multiple Experts: A Bayesian Model Combination Approach
TLDR
This paper applies Bayesian Model Combination with multiple experts in a way that learns to trust a good combination of experts as training progresses, and is shown numerically to improve convergence across discrete and continuous domains and different reinforcement learning algorithms.
Online Bayesian Moment Matching based SAT Solver Heuristics
TLDR
A Bayesian Moment Matching (BMM) based method aimed at solving the initialization problem in Boolean SAT solvers is presented; it significantly outperforms the same solver using all other initialization methods, solving 12 additional instances with better average runtime over the SAT 2018 competition benchmark.
Bayesian Transfer Learning for Sequential Data Modeling
TLDR
This work combines hidden Markov models with Gaussian mixture models that are learned based on streaming data by online Bayesian moment matching and demonstrates the resulting transfer learning technique with three real-world applications.
Online flow size prediction for improved network routing
TLDR
An emerging application of data mining in computer networks concerns predicting the size of a flow and detecting elephant flows; the predictive power of a set of features and the accuracy of three online predictors based on neural networks, Gaussian process regression, and online Bayesian Moment Matching are evaluated.
Priority Policy in Multi-Queue Data Center Networks via per-Port ECN Marking
TLDR
The proposed Priority-ECN uses an approach similar to Cut Payload (CP), dropping packet payloads rather than metadata when a queue reaches the threshold, yielding an efficient scheme that obtains near-optimal flow completion times across different flow sizes with very low latency.
CDCL(Crypto) and Machine Learning based SAT Solvers for Cryptanalysis
TLDR
An approach called CDCL(Crypto) is described that tailors the internal subroutines of the CDCL SAT solver with domain-specific knowledge about cryptographic primitives, and a formulation of SAT as Bayesian moment matching is used to address the heuristic initialization problem in SAT solvers.

References

Showing 1-10 of 77 references
Variational algorithms for approximate Bayesian inference
TLDR
A unified variational Bayesian (VB) framework is presented that approximates computations in models with latent variables using a lower bound on the marginal likelihood; it is compared to other methods including sampling, Cheeseman-Stutz, and asymptotic approximations such as BIC.
Online EM Algorithm for Hidden Markov Models
TLDR
Although the proposed online EM algorithm resembles a classical stochastic approximation (Robbins-Monro) algorithm, it is sufficiently different to resist conventional convergence analysis; limited results are provided that identify the potential limiting points of the recursion as well as the large-sample behavior of the quantities involved in the algorithm.
Online Learning with Hidden Markov Models
TLDR
An online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs) is presented, generalized to the case where the model parameters can change with time by introducing a discount factor into the recurrence relations.
Learning Topic Models by Belief Propagation
TLDR
The collapsed LDA is represented as a factor graph, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation and is validated by encouraging experimental results on four large-scale document datasets.
Stochastic variational inference for hidden Markov models
TLDR
An SVI algorithm is developed that harnesses the memory decay of the chain to adaptively bound errors arising from edge effects and demonstrates the effectiveness of the algorithm on synthetic experiments and a large genomics dataset where a batch algorithm is computationally infeasible.
A Method of Moments for Mixture Models and Hidden Markov Models
TLDR
This work develops an efficient method of moments approach to parameter estimation for a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians (such as mixtures of axis-aligned Gaussians) and hidden Markov models.
Expectation Propagation for approximate Bayesian inference
TLDR
Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network, which makes it applicable to hybrid networks with discrete and continuous nodes.
An Introduction to Hidden Markov Models and Bayesian Networks
TLDR
A tutorial on learning and inference in hidden Markov models in the context of the recent literature on Bayesian networks is provided, and Bayesian methods for model selection in generalized HMMs are discussed.
An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems
TLDR
A new online spectral algorithm is proposed, which uses tricks such as incremental Singular Value Decomposition (SVD) and random projections to scale to much larger data sets and more complex systems than previous methods.
Bayesian online algorithms for learning in discrete hidden Markov models
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the
...