• Corpus ID: 220250337

α Belief Propagation for Approximate Inference

  Title: α Belief Propagation for Approximate Inference
  Authors: Dong Liu, Minh Thành Vu, Zuxing Li, Lars Kildehøj Rasmussen
  Journal: arXiv: Machine Learning
The belief propagation (BP) algorithm is a widely used message-passing method for inference in graphical models. On loop-free graphs, BP converges in linear time; on graphs with loops, however, its performance is uncertain and the nature of its solution is poorly understood. To gain a better understanding of BP on general graphs, we derive an interpretable belief propagation algorithm motivated by the minimization of a localized α-divergence. We term this algorithm α belief propagation.
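The abstract's claim that BP converges in linear time and computes exact marginals on loop-free graphs can be illustrated with a minimal sum-product sketch on a chain of binary variables. The potentials and variable names below are illustrative assumptions, not taken from the paper:

```python
import itertools
import numpy as np

# Sum-product belief propagation on a chain (a loop-free graph).
# One forward and one backward sweep suffice: linear in the chain length.

n = 4                                   # number of binary variables
rng = np.random.default_rng(0)
unary = rng.random((n, 2)) + 0.1        # node potentials phi_i(x_i)
pair = rng.random((n - 1, 2, 2)) + 0.1  # edge potentials psi_i(x_i, x_{i+1})

# Forward messages m_f[i]: message arriving at node i from the left.
m_f = np.ones((n, 2))
for i in range(1, n):
    m_f[i] = pair[i - 1].T @ (unary[i - 1] * m_f[i - 1])

# Backward messages m_b[i]: message arriving at node i from the right.
m_b = np.ones((n, 2))
for i in range(n - 2, -1, -1):
    m_b[i] = pair[i] @ (unary[i + 1] * m_b[i + 1])

# Beliefs: local potential times incoming messages, normalized per node.
beliefs = unary * m_f * m_b
beliefs /= beliefs.sum(axis=1, keepdims=True)

# Brute-force check: enumerate all 2^n configurations.
exact = np.zeros((n, 2))
for x in itertools.product([0, 1], repeat=n):
    w = np.prod([unary[i, x[i]] for i in range(n)])
    w *= np.prod([pair[i, x[i], x[i + 1]] for i in range(n - 1)])
    for i in range(n):
        exact[i, x[i]] += w
exact /= exact.sum(axis=1, keepdims=True)

print(np.allclose(beliefs, exact))  # on a tree, BP marginals are exact
```

On a graph with cycles the same message updates must instead be iterated to a fixed point, and it is precisely that regime the paper analyzes.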


Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., neighbors prefer to be aligned).

Understanding belief propagation and its generalizations

It is shown that BP can only converge to a fixed point that is also a stationary point of the Bethe approximation to the free energy, which enables connections to be made with variational approaches to approximate inference.

Generalized Belief Propagation

It is shown that BP can only converge to a stationary point of an approximate free energy, known as the Bethe free energy in statistical physics, and generalized belief propagation (GBP) versions of these Kikuchi approximations are derived.

A family of algorithms for approximate Bayesian inference

This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible, and is found to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes.

Stochastic Belief Propagation: A Low-Complexity Alternative to the Sum-Product Algorithm

Stochastic belief propagation is proposed, an adaptively randomized version of the BP message updates in which each node passes randomly chosen information to each of its neighbors, and can provably yield reductions in computational and communication complexities for various classes of graphical models.

Expectation Propagation for approximate Bayesian inference

Expectation Propagation approximates the belief states by retaining only expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network, which makes it applicable to hybrid networks with discrete and continuous nodes.

Fractional Belief Propagation

Fractional belief propagation is formulated in terms of a family of approximate free energies, which includes the Bethe free energy and the naive mean-field free energy as special cases, and the scale parameters can be tuned using the linear response correction of the clique marginals.
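The scale parameters in such families are closely related to the α-divergence that motivates the α belief propagation paper above. For normalized densities p and q, the (Amari) α-divergence can be written as

```latex
D_{\alpha}(p \,\|\, q)
  \;=\;
  \frac{1}{\alpha(1-\alpha)}
  \left( 1 - \int p(x)^{\alpha} \, q(x)^{1-\alpha} \, dx \right),
```

which recovers the KL divergence KL(p‖q) as α → 1 and the reverse KL divergence KL(q‖p) as α → 0, so varying α interpolates between the two directions of KL minimization used in variational inference.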

Pseudo Prior Belief Propagation for densely connected discrete graphs

  • J. Goldberger, Amir Leshem
  • Computer Science
    2010 IEEE Information Theory Workshop on Information Theory (ITW 2010, Cairo)
  • 2010
This paper proposes a new algorithm for the linear least squares problem in which the unknown variables are constrained to lie in a finite set, using minimum mean square error (MMSE) detection to yield pseudo prior information on each variable.

Inference in Probabilistic Graphical Models by Graph Neural Networks

This work uses Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves inference tasks and demonstrates the efficacy of this inference approach by training GNNs on a collection of graphical models and showing that they substantially outperform belief propagation on loopy graphs.

Correctness of Local Probability Propagation in Graphical Models with Loops

An analytical relationship is derived between the probabilities computed using local propagation and the correct marginals and a category of graphical models with loops for which local propagation gives rise to provably optimal maximum a posteriori assignments (although the computed marginals will be incorrect).