Corpus ID: 220280768

Belief Propagation Neural Networks

@article{Kuck2020BeliefPN,
  title={Belief Propagation Neural Networks},
  author={Jonathan Kuck and Shuvam Chakraborty and Hao Tang and Rachel Luo and Jiaming Song and Ashish Sabharwal and Stefano Ermon},
  journal={ArXiv},
  year={2020},
  volume={abs/2007.00295}
}
Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a class of parameterized operators that operate on factor graphs and generalize Belief Propagation (BP). In its strictest form, a BPNN layer (BPNN-D) is a learned iterative operator that provably… 
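
To make concrete what BPNN generalizes, here is a minimal sketch of the standard sum-product BP updates on a discrete factor graph; the function name, graph encoding, and fixed damping constant are illustrative assumptions, not the authors' implementation. A BPNN-D layer, roughly speaking, replaces the fixed convex damping of the message updates with a learned operator designed to preserve BP's fixed points.

import numpy as np

def sum_product_bp(factors, n_vars, card, n_iters=50, damping=0.5):
    """Sum-product BP. factors: list of (var_ids, table), where
    table.shape == (card,) * len(var_ids). Returns variable marginals."""
    # Messages factor->variable (f2v) and variable->factor (v2f),
    # one per (factor index, variable index) edge, initialized uniform.
    f2v = {(fi, v): np.ones(card) / card
           for fi, (vs, _) in enumerate(factors) for v in vs}
    v2f = {k: np.ones(card) / card for k in f2v}
    for _ in range(n_iters):
        # Variable-to-factor: product of incoming messages from other factors.
        for (fi, v) in v2f:
            msg = np.ones(card)
            for (fj, u), m in f2v.items():
                if u == v and fj != fi:
                    msg = msg * m
            v2f[(fi, v)] = msg / msg.sum()
        # Factor-to-variable: marginalize the factor table against the other
        # incoming messages, then damp. The fixed damping constant here is
        # the piece a BPNN-D layer would replace with a learned operator.
        for fi, (vs, table) in enumerate(factors):
            for i, v in enumerate(vs):
                t = np.array(table, dtype=float)
                for j, u in enumerate(vs):
                    if j != i:
                        shape = [1] * len(vs)
                        shape[j] = card
                        t = t * v2f[(fi, u)].reshape(shape)
                axes = tuple(j for j in range(len(vs)) if j != i)
                new = t.sum(axis=axes)
                new = new / new.sum()
                f2v[(fi, v)] = damping * f2v[(fi, v)] + (1.0 - damping) * new
    # Beliefs: product of all incoming factor messages at each variable.
    beliefs = np.ones((n_vars, card))
    for (fi, v), m in f2v.items():
        beliefs[v] = beliefs[v] * m
    return beliefs / beliefs.sum(axis=1, keepdims=True)

On a single factor over two binary variables, e.g. sum_product_bp([((0, 1), np.array([[1., 2.], [2., 1.]]))], n_vars=2, card=2), this returns the exact marginals, since BP is exact on tree-structured factor graphs.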

Citations

Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems

This work proposes a novel self-supervised learning algorithm for DABP with a smoothed solution cost, which does not require expensive training labels and also avoids the common out-of-distribution issue through efficient online learning.

Deep learning via message passing algorithms based on belief propagation

This paper presents, and adapts to mini-batch training on GPUs, a family of BP-based message-passing algorithms with a reinforcement term that biases distributions toward locally entropic solutions; the resulting algorithms can train multi-layer neural networks with performance comparable to SGD heuristics across a diverse set of experiments on natural datasets.

Variational message passing neural network for Maximum-A-Posteriori (MAP) inference

A variational message passing neural network (V-MPNN) is proposed, leveraging both the power of neural networks in modeling complex functions and the well-established algorithmic theory of variational belief propagation.

NSNet: A General Neural Probabilistic Framework for Satisfiability Problems

A general neural framework that solves satisfiability problems as probabilistic inference; it outperforms BP and other neural baselines and achieves results competitive with state-of-the-art solvers.

Graph Neural Networks for Propositional Model Counting

This work presents an architecture based on the GNN framework for belief propagation of [15], extended with a self-attentive GNN and trained to approximately solve the #SAT problem, showing that the model scales effectively to much larger problem sizes, with performance comparable to or better than that of state-of-the-art approximate solvers.

A visual introduction to Gaussian Belief Propagation

This article presents a visual introduction to Gaussian Belief Propagation, an approximate probabilistic inference algorithm that operates by passing messages between the nodes of arbitrarily structured factor graphs, and argues that it has the right computational properties to act as a scalable distributed probabilistic inference framework for future machine learning systems.
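
For context, GBP keeps every message Gaussian by working in the information (canonical) form, under which taking products of messages reduces to summing parameters; the notation below is the standard one, not lifted from the article itself:

\[
p(x) \propto \exp\!\big(-\tfrac{1}{2}\, x^\top \Lambda\, x + \eta^\top x\big),
\qquad
(\Lambda, \eta)_{\text{product}} = \textstyle\sum_k (\Lambda_k, \eta_k),
\]

so a variable-node update amounts to adding the information matrices and information vectors of the incoming messages.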

Robust Deep Learning from Crowds with Belief Propagation

A neural-powered Bayesian framework is established, from which deepMF and deepBP are devised using different variational approximation methods, mean field (MF) and belief propagation (BP) respectively; the framework provides a unified view of existing methods, which are special cases of deepMF with different priors.

Learning Feasibility of Factored Nonlinear Programs in Robotic Manipulation Planning

The model is trained with a dataset of labeled subgraphs of Factored-NLPs and, importantly, can make useful predictions on larger factored nonlinear programs than those seen during training, which matters for robotic manipulation planning.

Neural Belief Propagation for Scene Graph Generation

A novel neural belief propagation method that employs a structural Bethe approximation rather than the mean field approximation to infer the associated marginals, achieving state-of-the-art performance on various popular scene graph generation benchmarks.

Equivariant Neural Network for Factor Graphs

This paper precisely characterizes the isomorphism properties of factor graphs and proposes two inference models, Factor-Equivariant Neural Belief Propagation (FE-NBP) and Factor-Equivariant Graph Neural Networks (FE-GNN); FE-NBP is a neural network that generalizes BP and respects each of the above properties.

References

SHOWING 1-10 OF 54 REFERENCES

Fast Convergence of Belief Propagation to Global Optima: Beyond Correlation Decay

It is shown that BP converges quickly to the global optimum of the Bethe free energy for Ising models on arbitrary graphs, as long as the Ising model is ferromagnetic (i.e., neighbors prefer to be aligned).
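
Concretely (standard Ising notation, assumed here rather than quoted from the paper), the model is

\[
p(x) \propto \exp\Big(\sum_{(i,j)\in E} J_{ij}\, x_i x_j + \sum_i h_i x_i\Big),
\qquad x_i \in \{-1, +1\},
\]

and the ferromagnetic condition is simply J_{ij} \ge 0 for every edge; notably, no correlation-decay assumption is required.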

Learning to Pass Expectation Propagation Messages

This work studies whether it is possible to automatically derive fast and accurate EP updates by learning a discriminative model to map EP message inputs to EP message outputs, and provides empirical analysis on several challenging and diverse factors, indicating that there is a space of factors where this approach appears promising.

Learning to Solve NP-Complete Problems - A Graph Neural Network for the Decision TSP

This paper shows that GNNs can learn to solve the decision variant of the Traveling Salesperson Problem (TSP), a highly relevant NP-Complete problem.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
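
Since this reference is cited for the optimizer itself, the update is worth stating. With gradient g_t and the paper's suggested defaults (\alpha = 10^{-3}, \beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8}):

\[
m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2,
\]
\[
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \alpha\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.
\]

The bias corrections \hat{m}_t, \hat{v}_t compensate for the zero initialization of the moment estimates.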

On the Hardness of Approximate Reasoning

Constructing free-energy approximations and generalized belief propagation algorithms

This work explains how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
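
As a reference point (standard notation from the GBP literature, with b_a and b_i the factor and variable beliefs, f_a the factor potentials, and d_i the number of factors touching variable i), the Bethe free energy that BP's fixed points extremize is

\[
F_{\text{Bethe}} = \sum_a \sum_{x_a} b_a(x_a) \ln \frac{b_a(x_a)}{f_a(x_a)}
\;-\; \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i),
\]

with \ln Z approximated by -F_{\text{Bethe}} at its minimum; region-based approximations generalize this by assigning counting numbers to larger regions of the graph.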

Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

This paper proposes a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when width is bounded.

Hashing-Based Approximate Probabilistic Inference in Hybrid Domains

This work shows how probabilistic inference in hybrid domains can be put within reach of hashing-based WMC solvers and builds on a notion called weighted model integration, which is a strict generalization of WMC.
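
In its simplest form (notation assumed here), weighted model integration extends the weighted model count \mathrm{WMC}(\varphi, w) = \sum_{x \models \varphi} w(x) over Boolean assignments to formulas with both Boolean variables b and real variables x:

\[
\mathrm{WMI}(\varphi, w) \;=\; \sum_{b} \int_{\{x \,:\, (b, x) \models \varphi\}} w(b, x)\, \mathrm{d}x,
\]

which reduces to WMC when no real variables are present.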

Graphical Models, Exponential Families, and Variational Inference

The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
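
The monograph's central identity, in its standard form, expresses the log-partition function of an exponential family variationally:

\[
\log Z(\theta) \;=\; \sup_{\mu \in \mathcal{M}} \big\{ \langle \theta, \mu \rangle + H(\mu) \big\},
\]

where \mathcal{M} is the marginal polytope of realizable mean parameters and H(\mu) is the entropy of the maximum-entropy distribution with mean parameters \mu; approximate methods such as BP arise by relaxing \mathcal{M} and approximating H.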

Amortized Bethe Free Energy Minimization for Learning MRFs

This work optimizes a saddle-point objective derived from the Bethe free energy approximation to the partition function; the objective requires no sampling and can be efficiently computed even for very expressive MRFs.
...