• Corpus ID: 11700915

Message-passing algorithms for quadratic minimization

@article{Ruozzi2013MessagepassingAF,
  title={Message-passing algorithms for quadratic minimization},
  author={Nicholas Ruozzi and Sekhar C. Tatikonda},
  journal={J. Mach. Learn. Res.},
  year={2013},
  volume={14},
  pages={2287--2314}
}
Gaussian belief propagation (GaBP) is an iterative algorithm for computing the mean (and variances) of a multivariate Gaussian distribution, or equivalently, the minimum of a multivariate positive definite quadratic function. Sufficient conditions, such as walk-summability, that guarantee the convergence and correctness of GaBP are known, but GaBP may fail to converge to the correct solution given an arbitrary positive definite covariance matrix. As was observed by Malioutov et al. (2006), the… 
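The scalar pairwise form of the GaBP iteration described in the abstract can be sketched as follows. This is an illustrative sketch, not code from the paper: the function `gabp_means`, the example matrix `A`, and vector `b` are assumptions chosen so that `A` is walk-summable (its normalized off-diagonal part has spectral radius below 1), the regime in which GaBP is known to converge to the correct means.

```python
import numpy as np

def gabp_means(A, b, iters=200, tol=1e-10):
    """Scalar Gaussian BP in information form for the minimizer of
    (1/2) x^T A x - b^T x, i.e. the mean of N(A^{-1} b, A^{-1}).
    Illustrative sketch; assumes A is symmetric positive definite."""
    n = len(b)
    P = np.zeros((n, n))   # P[i, j]: precision of message i -> j
    h = np.zeros((n, n))   # h[i, j]: potential of message i -> j
    for _ in range(iters):
        P_new = np.zeros((n, n))
        h_new = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0.0:
                    continue
                # combine node potential with all incoming messages except j's
                Pi = A[i, i] + P[:, i].sum() - P[j, i]
                hi = b[i] + h[:, i].sum() - h[j, i]
                P_new[i, j] = -A[i, j] ** 2 / Pi
                h_new[i, j] = -A[i, j] * hi / Pi
        done = np.max(np.abs(h_new - h)) < tol
        P, h = P_new, h_new
        if done:
            break
    prec = np.diag(A) + P.sum(axis=0)   # belief precisions
    return (b + h.sum(axis=0)) / prec   # belief means

# Walk-summable example: normalized off-diagonal entries sum to well
# below 1 in every row, so GaBP converges to the exact means.
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = gabp_means(A, b)   # agrees with np.linalg.solve(A, b)
```

At convergence the belief means coincide with the true means A⁻¹b, which is exactly the walk-summability guarantee the abstract refers to.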
Regularized Gaussian Belief Propagation with Nodes of Arbitrary Size
TLDR
It is proved that, given sufficient regularization, this Gaussian belief propagation algorithm will converge and provide the exact marginal means at convergence, regardless of the way variables are assigned to nodes.
On the Convergence of Gaussian Belief Propagation with Nodes of Arbitrary Size
TLDR
It is argued that GaBP-m is robust towards a certain change in variables, a property not shared by iterative solvers of linear systems, such as the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods.
Regularized Gaussian belief propagation
TLDR
This article investigates a regularized BP scheme by focusing on loopy Markov graphs induced by a multivariate Gaussian distribution in canonical form, and shows that the adjusted BP will always converge, with sufficient tuning, while maintaining the exact marginal means.
Convergence rate analysis of Gaussian belief propagation for Markov networks
TLDR
This paper extends this convergence result further by showing that the convergence is exponential under the generalised diagonal dominance condition, and provides a simple bound for the convergence rate.
Convergence of Gaussian Belief Propagation Under General Pairwise Factorization: Connecting Gaussian MRF with Pairwise Linear Gaussian Model
TLDR
The newly established link between the Gaussian MRF and the pairwise linear Gaussian model reveals an easily verifiable sufficient convergence condition in the pairwise linear Gaussian model, which provides a unified criterion for assessing the convergence of Gaussian BP in multiple applications.
Probabilistic graphical models: distributed inference and learning models with small feedback vertex sets
TLDR
Recursive FMP is developed, a purely distributed extension of FMP in which all nodes use the same integrated message-passing protocol; a subgraph perturbation sampling algorithm is also introduced, which makes use of any pre-existing tractable inference algorithm for a subgraph by perturbing this algorithm so as to yield asymptotically exact samples from the intended distribution.
Making Pairwise Binary Graphical Models Attractive
TLDR
This work proposes a novel scheme that has better convergence properties than BP and provably provides better partition function estimates in many instances than TRBP, and explores the properties of this special cover, which can be used to construct an algorithm with the desired properties.
Recursive FMP for distributed inference in Gaussian graphical models
  • Y. Liu, A. Willsky
  • Computer Science
    2013 IEEE International Symposium on Information Theory
  • 2013
TLDR
Recursive FMP is proposed, a purely distributed extension of FMP where all nodes use the same message-passing protocol and an inference problem on the entire graph is recursively reduced to those on smaller subgraphs in a distributed manner.
Tightness Results for Local Consistency Relaxations in Continuous MRFs
TLDR
It is shown that the messages of LBP can be used to calculate upper and lower bounds on the MAP value, and that these bounds coincide at convergence, yielding a natural stopping criterion which was not previously available.
A Lower Bound on the Partition Function of Attractive Graphical Models in the Continuous Case
  • N. Ruozzi
  • Computer Science, Mathematics
    AISTATS
  • 2017
TLDR
This work provides a graph-cover-based upper bound for continuous graphical models and uses this characterization (along with a continuous analog of a discrete correlation-type inequality) to show that the Bethe partition function also provides a lower bound on the true partition function of attractive graphical models in the continuous case.

References

SHOWING 1-10 OF 32 REFERENCES
Fixing convergence of Gaussian belief propagation
TLDR
This paper develops a double-loop algorithm for forcing convergence of GaBP and shows that using this construction, it is able to force convergence of Montanari's linear detection algorithm, in cases where it would originally fail.
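The double-loop idea described above, forcing convergence by making the inner problem well behaved, can be sketched as an outer iteration around a diagonally loaded system. A minimal sketch under stated assumptions: `gamma`, the matrix `A`, and vector `b` are illustrative choices, and a direct linear solve stands in for the inner GaBP pass (which in the double-loop construction would run on the loaded, now walk-summable matrix).

```python
import numpy as np

def double_loop_solve(A, b, gamma=2.0, iters=500):
    """Outer loop of a diagonal-loading scheme: each step solves the
    loaded system (A + gamma*I) x_new = b + gamma*x, whose fixed point
    satisfies A x = b.  The outer iteration contracts with rate
    gamma / (gamma + lambda_min(A)) < 1 for positive definite A."""
    n = len(b)
    x = np.zeros(n)
    M = A + gamma * np.eye(n)   # loaded inner matrix, stand-in for GaBP's
    for _ in range(iters):
        x = np.linalg.solve(M, b + gamma * x)
    return x

# A positive definite matrix that is NOT walk-summable (normalized
# off-diagonal entries sum to 1.2 > 1 per row), where plain GaBP can fail:
A = np.array([[1.0, 0.6, 0.6],
              [0.6, 1.0, 0.6],
              [0.6, 0.6, 1.0]])
b = np.array([1.0, 0.0, -1.0])
x = double_loop_solve(A, b)   # converges to np.linalg.solve(A, b)
```

The point of the loading is that A + gamma·I can be made diagonally dominant (hence walk-summable) for large enough gamma, so the inner GaBP pass converges even when GaBP on A itself does not.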
Convergent message passing algorithms - a unifying view
TLDR
This paper presents a simple derivation of an abstract algorithm, tree-consistency bound optimization (TCBO), that is provably convergent in both its sum- and max-product forms, and shows that Wainwright's non-convergent sum-product algorithm for tree-based variational bounds is actually convergent with the right update order.
On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs
TLDR
It is shown that the assignment based on a fixed point of max-product is a "neighborhood maximum" of the posterior probabilities: the posterior probability of the max-product assignment is guaranteed to be greater than all other assignments in a particular large region around that assignment.
Norm-Product Belief Propagation: Primal-Dual Message-Passing for Approximate Inference
TLDR
This paper generalizes the sum-product and max-product belief propagation algorithms and introduces a new set of convergent algorithms based on a "convex free energy", with linear-programming (LP) relaxation obtained as a zero-temperature limit of the convex free energy.
Correctness of Belief Propagation in Gaussian Graphical Models of Arbitrary Topology
TLDR
This work analyzes belief propagation in networks with arbitrary topologies when the nodes in the graph describe jointly Gaussian random variables and gives an analytical formula relating the true posterior probabilities with those calculated using loopy propagation.
MAP estimation via agreement on trees: message-passing and linear programming
TLDR
This work develops and analyze methods for computing provably optimal maximum a posteriori probability (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles and establishes a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product (min-sum) message-passing algorithm.
Tree consistency and bounds on the performance of the max-product algorithm and its generalizations
TLDR
A novel perspective on the max-product algorithm is provided, based on the idea of reparameterizing the distribution in terms of so-called pseudo-max-marginals on nodes and edges of the graph, to provide conceptual insight into the algorithm in application to graphs with cycles.
MAP estimation via agreement on (hyper)trees: Message-passing and linear programming
TLDR
This work develops and analyzes methods for computing provably optimal MAP configurations for a subclass of Markov random fields defined on graphs with cycles, and establishes a connection between a certain LP relaxation of the mode-finding problem and a reweighted form of the max-product message-passing algorithm.
Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching
TLDR
A class of local message-passing algorithms is developed, called tree-reweighted belief propagation, for efficiently computing upper bounds on the log partition function of an arbitrary undirected graphical model, as well as the associated pseudomarginals.
Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations
TLDR
A novel message passing algorithm for approximating the MAP problem in graphical models that is derived via block coordinate descent in a dual of the LP relaxation of MAP, but does not require any tunable parameters such as step size or tree weights.