Corpus ID: 220280768

Belief Propagation Neural Networks

@article{Kuck2020BeliefPN,
  title={Belief Propagation Neural Networks},
  author={Jonathan Kuck and Shuvam Chakraborty and Hao Tang and Rachel Luo and Jiaming Song and Ashish Sabharwal and Stefano Ermon},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.00295}
}
Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize Belief Propagation (BP). In its strictest form, a BPNN layer (BPNN-D) is a learned iterative operator that provably maintains many of the desirable properties of BP for any choice of the parameters.
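
Since the abstract frames a BPNN layer as a learned generalization of BP's iterative message updates, a minimal sum-product iteration on a toy factor graph may help fix ideas. This is a sketch under stated assumptions: the graph, the synchronous damped schedule, and every name in it are illustrative, not the authors' implementation; a BPNN-D layer would replace the fixed update rule below with a learned operator.

import numpy as np

# Toy factor graph: two binary variables, two unary factors, one pairwise factor.
# Each factor is (list of variable indices, potential table). This graph and the
# damped synchronous schedule are illustrative assumptions, not the paper's code.
factors = [
    ([0], np.array([0.7, 0.3])),                   # unary potential on x0
    ([1], np.array([0.4, 0.6])),                   # unary potential on x1
    ([0, 1], np.array([[1.2, 0.8],
                       [0.8, 1.2]])),              # pairwise coupling
]
n_vars, card = 2, 2

edges = [(a, i) for a, (vs, _) in enumerate(factors) for i in vs]
msg_f2v = {e: np.full(card, 1.0 / card) for e in edges}   # factor -> variable
msg_v2f = {e: np.full(card, 1.0 / card) for e in edges}   # variable -> factor

def bp_step(damping=0.5):
    """One synchronous sum-product update; BPNN-D would learn this operator."""
    # Variable-to-factor: product of incoming factor messages, excluding the target.
    for (a, i) in edges:
        prod = np.ones(card)
        for (b, j) in edges:
            if j == i and b != a:
                prod = prod * msg_f2v[(b, j)]
        msg_v2f[(a, i)] = prod / prod.sum()
    # Factor-to-variable: weight the table by incoming messages, marginalize the rest.
    for (a, i) in edges:
        vs, table = factors[a]
        t = table.astype(float).copy()
        for axis, j in enumerate(vs):
            if j != i:
                t = np.moveaxis(np.moveaxis(t, axis, -1) * msg_v2f[(a, j)], -1, axis)
        keep = vs.index(i)
        out = t.sum(axis=tuple(ax for ax in range(len(vs)) if ax != keep))
        msg_f2v[(a, i)] = (1 - damping) * out / out.sum() + damping * msg_f2v[(a, i)]

for _ in range(25):
    bp_step()

for i in range(n_vars):                            # read off approximate marginals
    b = np.ones(card)
    for (a, j) in edges:
        if j == i:
            b = b * msg_f2v[(a, j)]
    print(f"x{i} marginal:", b / b.sum())

On a tree-structured graph like this one the marginals are exact at convergence; on loopy graphs they are the Bethe approximation that the paper's learned layers build on.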

Citations

Deep learning via message passing algorithms based on belief propagation

This paper presents a family of BP-based message-passing algorithms with a reinforcement term that biases distributions towards locally entropic solutions, adapts them to mini-batch training on GPUs, and shows they can train multi-layer neural networks with performance comparable to SGD heuristics in a diverse set of experiments on natural datasets.

Graph Neural Networks for Propositional Model Counting

This work presents an architecture based on the GNN framework for belief propagation of [15], extended with self-attentive GNN layers and trained to approximately solve the #SAT problem, showing that the model scales effectively to much larger problem sizes with performance comparable to or better than state-of-the-art approximate solvers.

Neural Enhanced Belief Propagation on Factor Graphs

This work proposes a new hybrid model that runs a factor-graph GNN (FG-GNN) conjointly with belief propagation, applies the ideas to error correction decoding tasks, and shows that the algorithm can outperform belief propagation for LDPC codes on bursty channels.

Graph Belief Propagation Networks

This work introduces a model that combines the advantages of these two approaches: inference is carried out over the marginal probabilities of a conditional random field, similar to collective classification, while the potentials of the random field are learned through end-to-end training, akin to graph neural networks.

A visual introduction to Gaussian Belief Propagation

This article presents a visual introduction to Gaussian Belief Propagation, an approximate probabilistic inference algorithm that operates by passing messages between the nodes of arbitrarily structured factor graphs and that has the right computational properties to act as a scalable distributed probabilistic inference framework for future machine learning systems.
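
For readers unfamiliar with Gaussian BP, its standard information-form updates are sketched below; this is the common textbook presentation of GaBP, not a quotation from the article itself. Writing the joint density as p(x) \propto \exp(-\tfrac{1}{2} x^\top \Lambda x + \eta^\top x), the message variable i sends to neighbor j, and the resulting marginal, are:

\alpha_{i\to j} = \Lambda_{ii} + \sum_{k\in N(i)\setminus\{j\}} \Lambda_{k\to i}, \qquad
\beta_{i\to j} = \eta_i + \sum_{k\in N(i)\setminus\{j\}} \eta_{k\to i},

\Lambda_{i\to j} = -\Lambda_{ji}\,\alpha_{i\to j}^{-1}\,\Lambda_{ij}, \qquad
\eta_{i\to j} = -\Lambda_{ji}\,\alpha_{i\to j}^{-1}\,\beta_{i\to j},

\hat{\Lambda}_i = \Lambda_{ii} + \sum_{k\in N(i)} \Lambda_{k\to i}, \qquad
\mu_i = \hat{\Lambda}_i^{-1}\Big(\eta_i + \sum_{k\in N(i)} \eta_{k\to i}\Big).

On trees these updates are exact; on loopy graphs GaBP, when it converges, recovers the exact means but only approximate variances.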

Robust Deep Learning from Crowds with Belief Propagation

A neural-powered Bayesian framework is established, from which deepMF and deepBP are devised with different choices of variational approximation method, mean field (MF) and belief propagation (BP) respectively; the framework provides a unified view of existing methods, which are special cases of deepMF with different priors.

Neural Belief Propagation for Scene Graph Generation

A novel neural belief propagation method employs a structural Bethe approximation rather than the mean field approximation to infer the associated marginals, and achieves state-of-the-art performance on various popular scene graph generation benchmarks.

Equivariant Neural Network for Factor Graphs

This paper precisely characterizes the isomorphism properties of factor graphs and proposes two inference models: Factor-Equivariant Neural Belief Propagation (FE-NBP), a neural network that generalizes BP and respects each of the above properties, and Factor-Equivariant Graph Neural Networks (FE-GNN).

IBIA: Bayesian Inference via Incremental Build-Infer-Approximate operations on Clique Trees

It is shown that the SLCTF data structure can be used for efficient approximate inference of the partition function and of prior and posterior marginals. It is also proved that the algorithm for incremental construction of clique trees always generates a valid clique tree (CT), and that the approximation technique preserves the joint beliefs of the variables within a clique.

Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective

This work resorts to Bayesian learning to deeply investigate the functions of non-linearity in GNNs for node classification tasks, and proves that the superiority of ReLU activations is only significant when the node attributes are far more informative than the graph structure, which nicely matches many previous empirical observations.

References


Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., neighbors prefer to be aligned).
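
To make the claim concrete, here is the Bethe free energy in its standard form (following Yedidia et al.; the notation is supplied here for illustration, not quoted from the cited paper). For a pairwise model p(x) \propto \prod_{(i,j)\in E}\psi_{ij}(x_i,x_j)\prod_{i\in V}\psi_i(x_i) with beliefs b:

F_{\text{Bethe}}(b) = -\sum_{(i,j)\in E}\sum_{x_i,x_j} b_{ij}(x_i,x_j)\ln\psi_{ij}(x_i,x_j) - \sum_{i\in V}\sum_{x_i} b_i(x_i)\ln\psi_i(x_i)
\;+\; \sum_{(i,j)\in E}\sum_{x_i,x_j} b_{ij}(x_i,x_j)\ln b_{ij}(x_i,x_j) - \sum_{i\in V}(d_i-1)\sum_{x_i} b_i(x_i)\ln b_i(x_i),

where d_i is the degree of variable i. BP fixed points are stationary points of F_{\text{Bethe}} under marginalization constraints; the ferromagnetic case corresponds to \psi_{ij}(x_i,x_j)=\exp(\beta J_{ij} x_i x_j) with J_{ij}\ge 0 and x_i\in\{-1,+1\}.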

Neural Enhanced Belief Propagation on Factor Graphs

This work proposes a new hybrid model that runs a factor-graph GNN (FG-GNN) conjointly with belief propagation, applies the ideas to error correction decoding tasks, and shows that the algorithm can outperform belief propagation for LDPC codes on bursty channels.

Learning to Pass Expectation Propagation Messages

This work studies whether it is possible to automatically derive fast and accurate EP updates by learning a discriminative model to map EP message inputs to EP message outputs, and provides empirical analysis on several challenging and diverse factors, indicating that there is a space of factors where this approach appears promising.

Inference in Probabilistic Graphical Models by Graph Neural Networks

This work uses Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves inference tasks and demonstrates the efficacy of this inference approach by training GNNs on a collection of graphical models and showing that they substantially outperform belief propagation on loopy graphs.

Learning a SAT Solver from Single-Bit Supervision

Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations.

Learning to Solve NP-Complete Problems - A Graph Neural Network for the Decision TSP

This paper shows that GNNs can learn to solve the decision variant of the Traveling Salesperson Problem (TSP), a highly relevant NP-Complete problem.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.

On the Hardness of Approximate Reasoning

Constructing free-energy approximations and generalized belief propagation algorithms

This work explains how to obtain region-based free energy approximations that improve on the Bethe approximation, along with corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
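
As a sketch of the idea (a standard formulation from the GBP literature, with symbols defined here for illustration): a region-based approximation assigns each region R a belief b_R and a counting number c_R, and approximates the free energy as

F(b) = \sum_R c_R \sum_{x_R} b_R(x_R)\big[\ln b_R(x_R) - \ln \tilde{\psi}_R(x_R)\big], \qquad \tilde{\psi}_R(x_R) = \prod_{a\in R}\psi_a(x_a),

with the c_R chosen so that every factor and variable is counted exactly once (\sum_{R\supseteq a} c_R = 1). The Bethe approximation is the special case whose outer regions are single factors with their variables and whose inner regions are single variables; GBP passes messages between regions instead of between nodes.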

Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

This paper proposes a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when width is bounded.
...