Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems

@article{Deng2022DeepAB,
  title={Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems},
  author={Yanchen Deng and Shufeng Kong and Caihua Liu and Bo An},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.12000}
}
Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models, including solving Constraint Optimization Problems (COPs). It has been shown that BP can achieve state-of-the-art performance on various benchmarks by mixing old and new messages before sending the new one, i.e., damping. However, tuning a static damping factor for BP is not only laborious but can also harm performance. Moreover, existing BP algorithms…
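
For concreteness, here is a minimal sketch of a damped Max-sum style message update in Python. The message shapes and names are illustrative assumptions; only the mixing step itself reflects the damping described above.

```python
import numpy as np

def damped_message(new_msg: np.ndarray, old_msg: np.ndarray, lam: float) -> np.ndarray:
    """Damping: mix the old and new messages before sending.

    lam is the weight on the old message; lam = 0 recovers undamped BP.
    A static lam must normally be hand-tuned per benchmark, which is the
    laborious step the paper aims to replace with a learned, dynamic one.
    """
    return lam * old_msg + (1.0 - lam) * new_msg

# Toy Max-sum update from a variable to factor "f" (illustrative shapes):
# the new message is the sum of incoming factor-to-variable messages,
# excluding the one from "f" itself.
incoming = {"f": np.array([0.0, 2.0]), "g": np.array([1.0, -1.0])}
new_msg = sum(m for name, m in incoming.items() if name != "f")
old_msg = np.zeros(2)
print(damped_message(new_msg, old_msg, lam=0.5))  # -> [0.5, -0.5]
```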

References


Belief Propagation Neural Networks

BPNN-D is a learned iterative operator that provably maintains many of the desirable properties of BP for any choice of parameters; trained BPNNs learn to perform the task better than the original BP, converging 1.7x faster on Ising models while providing tighter bounds.

Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference

This paper generalizes the sum-product and max-product belief propagation algorithms and introduces a new set of convergent algorithms based on "convex free energies" and linear-programming (LP) relaxation, obtained as the zero-temperature limit of a convex free energy.
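
For intuition, the "zero-temperature" connection rests on the standard log-sum-exp limit (a generic identity stated here for context, not taken from the paper itself):

\[ \lim_{T \to 0^+} T \ln \sum_{x} \exp\big(\theta(x)/T\big) = \max_{x}\, \theta(x), \]

so annealed sum-product style updates recover max-product, and the associated LP relaxation, as the temperature goes to zero.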

Neural Enhanced Belief Propagation on Factor Graphs

This work proposes a new hybrid model that runs a graph neural network defined on the factor graph (FG-GNN) conjointly with belief propagation, applies the idea to error-correction decoding tasks, and shows that the algorithm can outperform belief propagation for LDPC codes on bursty channels.

Beyond Trees: Analysis and Convergence of Belief Propagation in Graphs with Multiple Cycles

This work extends the theory on the behavior of belief propagation in general, and Max-sum specifically, when solving problems represented by graphs with multiple cycles, and proves that when the algorithm is applied to adjacent symmetric cycles, a large enough damping factor guarantees convergence to the optimal solution.
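
Concretely, damping replaces the freshly computed message with a convex combination of old and new (standard formulation; \(\lambda\) is the weight on the old message, and "large enough" above refers to \(\lambda\) sufficiently close to 1):

\[ \tilde m^{(t)}_{i\to j} = \lambda\, \tilde m^{(t-1)}_{i\to j} + (1-\lambda)\, m^{(t)}_{i\to j}, \qquad \lambda \in [0,1). \]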

Governing convergence of Max-sum on DCOPs through damping and splitting

Fractional Belief Propagation

Fractional belief propagation is formulated in terms of a family of approximate free energies, which includes the Bethe free energy and the naive mean-field free energy as special cases; using the linear response correction of the clique marginals, the scale parameters can be tuned.
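
One way to write such a family, sketched here under the assumption that it follows the usual Bethe form with a per-factor scale parameter \(c_a\) (the paper's exact parameterization may differ), is

\[ F_c(b) = \sum_a c_a \sum_{x_a} b_a(x_a) \ln \frac{b_a(x_a)}{f_a(x_a)^{1/c_a}} + \sum_i \Big(1 - \sum_{a \ni i} c_a\Big) \sum_{x_i} b_i(x_i) \ln b_i(x_i), \]

where setting every \(c_a = 1\) recovers the Bethe free energy.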

Neural Regret-Matching for Distributed Constraint Optimization Problems

This paper tackles this limitation by incorporating deep neural networks in solving DCOPs for the first time, presents a neural context-based sampling scheme built upon regret-matching, and theoretically establishes the regret bound of the algorithm.
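
For context, regret-matching in its classic form (Hart and Mas-Colell) samples each action in proportion to its positive cumulative regret. The sketch below shows only that standard rule, not the paper's neural, context-based variant; all names are hypothetical.

```python
import numpy as np

def regret_matching_policy(cum_regret: np.ndarray) -> np.ndarray:
    """Action probabilities proportional to positive cumulative regret.

    Falls back to a uniform distribution when no action has positive regret.
    """
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    if total > 0.0:
        return positive / total
    return np.full(len(cum_regret), 1.0 / len(cum_regret))

# Toy usage: cumulative regrets [2, -1, 3] -> probabilities [0.4, 0.0, 0.6].
print(regret_matching_policy(np.array([2.0, -1.0, 3.0])))
```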

Pretrained Cost Model for Distributed Constraint Optimization Problems

The model, GAT-PCM, is pretrained offline with optimally labelled data to construct effective heuristics that boost a broad range of DCOP algorithms in which evaluating the quality of a partial assignment is critical, such as local search or backtracking search.

Attention, Learn to Solve Routing Problems!

A model based on attention layers with benefits over the Pointer Network is proposed, and it is shown how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which is more efficient than using a value function.
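
A minimal sketch of the training idea, assuming a toy routing setup and a hypothetical linear scoring function in place of the paper's attention model: the REINFORCE advantage is the sampled tour cost minus the cost of a deterministic greedy rollout. (The paper uses a periodically updated frozen copy of the model for the baseline; for brevity the same parameters are reused greedily here.)

```python
import torch
from torch.distributions import Categorical

def rollout(score_fn, coords, greedy=False):
    """Build a tour step by step; return (tour length, sum of log-probs)."""
    n = coords.size(0)
    visited = torch.zeros(n, dtype=torch.bool)
    log_prob_sum = torch.zeros(())
    order, current = [0], 0
    visited[0] = True
    for _ in range(n - 1):
        logits = score_fn(coords, current).masked_fill(visited, float("-inf"))
        if greedy:
            nxt = logits.argmax()
        else:
            dist = Categorical(logits=logits)
            nxt = dist.sample()
            log_prob_sum = log_prob_sum + dist.log_prob(nxt)
        visited[nxt] = True
        order.append(int(nxt))
        current = int(nxt)
    tour = coords[order + [0]]  # return to the start city
    return (tour[1:] - tour[:-1]).norm(dim=-1).sum(), log_prob_sum

# Hypothetical stand-in for the attention model: a learned bilinear score.
W = torch.randn(2, 2, requires_grad=True)
def scores(coords, current):
    return coords @ W @ coords[current]

coords = torch.rand(10, 2)
opt = torch.optim.Adam([W], lr=1e-2)

# One REINFORCE step: advantage = sampled cost - greedy-rollout baseline cost.
cost, logp = rollout(scores, coords)
with torch.no_grad():
    baseline_cost, _ = rollout(scores, coords, greedy=True)
loss = (cost.detach() - baseline_cost) * logp
opt.zero_grad()
loss.backward()
opt.step()
```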

Constructing free-energy approximations and generalized belief propagation algorithms

This work explains how to obtain region-based free-energy approximations that improve on the Bethe approximation, along with corresponding generalized belief propagation (GBP) algorithms, and presents empirical results showing that GBP can significantly outperform BP.
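
In the region-based construction, each region \(R\) carries a belief \(b_R\) and a counting number \(c_R\), and the free energy is approximated as

\[ F(\{b_R\}) = \sum_R c_R \sum_{x_R} b_R(x_R)\,\big( E_R(x_R) + \ln b_R(x_R) \big), \qquad E_R(x_R) = -\sum_{a \in R} \ln f_a(x_a), \]

with the counting numbers chosen so that every factor and variable is counted exactly once; the Bethe approximation is the special case whose regions are single factors and single variables.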
...