Corpus ID: 3032454

An Improved Three-Weight Message-Passing Algorithm

@article{Derbinsky2013AnIT,
  title={An Improved Three-Weight Message-Passing Algorithm},
  author={Nate Derbinsky and Jos{\'e} Bento and Veit Elser and Jonathan S. Yedidia},
  journal={ArXiv},
  year={2013},
  volume={abs/1305.1961}
}
We describe how the powerful "Divide and Concur" (DC) algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization, and we introduce an improved message-passing algorithm based on ADMM/DC that uses three distinct message weights: a "certain" weight and a "no opinion" weight, in addition to the standard weight used in ADMM/DC. The "certain" messages allow our improved…
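
To make the three weights concrete, here is a minimal Python sketch of how they might enter the "concur" (averaging) step of an ADMM/DC-style iteration; the function name and simplifications (a single scalar consensus variable, no dual updates) are ours, not the paper's:

import numpy as np

def weighted_concur(values, weights):
    # Concur step with three message weights: 0 ("no opinion"),
    # a finite rho (the standard ADMM/DC weight), or np.inf ("certain").
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    certain = np.isinf(weights)
    if certain.any():
        # Certain messages override everything else; we assume here
        # that all certain messages agree on the value.
        return values[certain][0]
    active = weights > 0
    if not active.any():
        # Every neighbor abstained; fall back to a plain average.
        return values.mean()
    return np.average(values[active], weights=weights[active])

# e.g. weighted_concur([1.0, 3.0, 5.0], [0.0, 2.0, 2.0]) == 4.0
# (the first message is "no opinion" and is ignored).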

Citations

Proactive Message Passing on Memory Factor Networks
TLDR
This work introduces a new type of graphical model called a "memory factor network" (MFN), together with an associated message-passing style algorithm, "proactive message passing" (PMP), that performs inference on MFNs, comes with convergence guarantees, and is efficient in comparison to competing algorithms such as variants of belief propagation.
Proximal Algorithms
TLDR
The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
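
As a quick illustration of the survey's subject (our sketch, not code from the paper): the proximal operator of the scaled ℓ1 norm is elementwise soft-thresholding, and composing any prox with a gradient step yields the proximal gradient method.

import numpy as np

def prox_l1(v, t):
    # prox of t*||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=200):
    # Minimizes f(x) + g(x) with f smooth and g "prox-friendly".
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: lasso, min 0.5*||Ax - b||^2 + lam*||x||_1.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])
lam = 0.1
step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1/L, L = Lipschitz constant of grad f
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          lambda v, t: prox_l1(v, lam * t),
                          np.zeros(2), step)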
Distributed Optimization, Averaging via ADMM, and Network Topology
TLDR
Different algorithms are compared when applied to a canonical distributed averaging consensus problem, and interesting connections between ADMM and lifted Markov chains are shown, along with an explicit characterization of ADMM's convergence and optimal parameter tuning in terms of spectral properties of the network.
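
A toy, centralized rendering of that averaging consensus problem solved by ADMM (the paper analyzes the decentralized variant, where communication follows the network's edges; everything below is our illustration):

import numpy as np

def admm_average(a, rho=1.0, n_iter=200):
    # min sum_i 0.5*(x_i - a_i)^2  subject to  x_i = z for all i.
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    u = np.zeros_like(a)
    z = 0.0
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local updates, in parallel
        z = np.mean(x + u)                     # gather/averaging step
        u = u + x - z                          # dual updates
    return z  # converges to np.mean(a)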
How is Distributed ADMM Affected by Network Topology?
TLDR
A full characterization of the convergence of distributed over-relaxed ADMM for the same type of consensus problem is provided in terms of the topology of the underlying graph, and the aforementioned conjecture is proved valid for any graph, even those whose random walks cannot be accelerated via Markov chain lifting.
A simple effective heuristic for embedded mixed-integer quadratic programming
TLDR
A fast optimization algorithm for approximately minimizing convex quadratic functions over the intersection of affine and separable constraints (i.e., the Cartesian product of possibly nonconvex real sets) that is based on a variation of the alternating direction method of multipliers (ADMM).
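
A hedged sketch of the heuristic's core loop, specialized to an unconstrained integer quadratic program (the paper handles affine constraints and general separable non-convex sets; all names here are illustrative):

import numpy as np

def miqp_admm_heuristic(P, q, rho=1.0, n_iter=100):
    # Approximately minimize 0.5*x'Px + q'x over integer x by alternating
    # a convex quadratic prox step with rounding (projection onto Z^n).
    # A heuristic: no optimality guarantee, per the cited paper.
    n = len(q)
    z = np.zeros(n)
    u = np.zeros(n)
    K = P + rho * np.eye(n)  # factor once in a real implementation
    for _ in range(n_iter):
        x = np.linalg.solve(K, rho * (z - u) - q)  # convex prox step
        z = np.round(x + u)                        # projection onto integers
        u = u + x - z                              # dual update
    return z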
Testing Fine-Grained Parallelism for the ADMM on a Factor-Graph
TLDR
This work proposes a problem-independent scheme for accelerating the Alternating Direction Method of Multipliers that can automatically exploit fine-grained parallelism on both GPUs and shared-memory multi-core computers, achieving significant speedups in application domains as diverse as combinatorial optimization, machine learning, and optimal control.
ADMM decoding of error correction codes: From geometries to algorithms
TLDR
This paper reviews the ADMM formulation and the geometries involved in its subroutines, and presents a linear-time algorithm for projecting onto an ℓ1 ball with box constraints.
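
For flavor, here is the standard O(n log n) sort-based projection onto an ℓ1 ball; the paper's contribution is a linear-time method that additionally handles box constraints, which this sketch omits:

import numpy as np

def project_l1_ball(v, r=1.0):
    # Euclidean projection of v onto {x : ||x||_1 <= r}.
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * ks > css - r)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)  # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)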
A general system for heuristic minimization of convex functions over non-convex sets
TLDR
This work describes an implementation of general heuristics for approximately solving a wide variety of problems with a convex objective and decision variables drawn from a non-convex set, in a package called NCVX, built as an extension of CVXPY, a Python package for formulating and solving convex optimization problems.
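
To show the modeling style that NCVX builds on, here is a small convex problem in plain CVXPY (only the public CVXPY API is used; NCVX-specific calls are not shown here):

import cvxpy as cp
import numpy as np

np.random.seed(0)
A, b = np.random.randn(20, 5), np.random.randn(20)
x = cp.Variable(5)
# Least squares with an l1-ball constraint; NCVX extends this style to
# variables drawn from non-convex sets (booleans, fixed cardinality, ...).
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [cp.norm1(x) <= 1])
prob.solve()
print(prob.value, x.value)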

References

Showing 1-10 of 32 references
Message-Passing Algorithms for Inference and Optimization: "Belief Propagation" and "Divide and Concur"
TLDR
The Divide and Concur algorithm is a projection-based constraint satisfaction algorithm that deals naturally with continuous variables, and converges to exact answers for problems where the solution sets of the constraints are convex.
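
A minimal sketch of the Divide and Concur iteration described here: each constraint projects its own replica of the variables ("divide"), then each variable averages its replicas ("concur"). This plain alternating form omits the difference-map acceleration, and the structure and names are ours:

import numpy as np

def divide_and_concur(projections, x0, n_iter=100):
    # projections: list of (idx, proj) pairs; proj projects the sub-vector
    # x[idx] onto that constraint's set.  For convex sets this alternating
    # scheme behaves like the averaged-projections method.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        # Divide: every constraint projects its own copy of its variables.
        replicas = [(idx, proj(x[idx])) for idx, proj in projections]
        # Concur: every variable takes the average of its replicas.
        sums = np.zeros_like(x)
        counts = np.zeros_like(x)
        for idx, val in replicas:
            sums[idx] += val
            counts[idx] += 1
        touched = counts > 0
        x[touched] = sums[touched] / counts[touched]
    return x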
Understanding belief propagation and its generalizations
TLDR
It is shown that BP can only converge to a fixed point that is also a stationary point of the Bethe approximation to the free energy, which enables connections to be made with variational approaches to approximate inference.
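
For reference, the Bethe free energy whose stationary points BP fixed points correspond to can be written, in standard notation (our addition, not text from this page), as

F_{\mathrm{Bethe}} = \sum_a \sum_{\mathbf{x}_a} b_a(\mathbf{x}_a) \ln \frac{b_a(\mathbf{x}_a)}{f_a(\mathbf{x}_a)} - \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i),

where b_a and b_i are the factor and variable beliefs, f_a are the factors, and d_i is the degree of variable i.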
Efficient belief propagation for early vision
TLDR
New algorithmic techniques are presented that substantially improve the running time of the belief propagation approach and reduce the complexity of the inference algorithm to be linear rather than quadratic in the number of possible labels for each pixel.
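
One of that paper's key tricks is computing a min-sum message under a linear (absolute-difference) pairwise cost in O(k) instead of O(k^2) per message, via a two-pass distance transform; a sketch of the one-dimensional pass:

def l1_min_convolution(cost, s):
    # Returns h with h[x] = min_y (cost[y] + s*|x - y|) in O(k) time.
    h = list(cost)
    for x in range(1, len(h)):           # forward pass
        h[x] = min(h[x], h[x - 1] + s)
    for x in range(len(h) - 2, -1, -1):  # backward pass
        h[x] = min(h[x], h[x + 1] + s)
    return h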
Divide and Concur and Difference-Map BP Decoders for LDPC Codes
TLDR
It is shown that this “difference-map belief propagation” (DMBP) decoder has dramatically improved error-floor performance compared to standard BP decoders, while maintaining a similar computational complexity.
Constructing free-energy approximations and generalized belief propagation algorithms
TLDR
This work explains how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms, and describes empirical results showing that GBP can significantly outperform BP.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
TLDR
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
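
The monograph's canonical worked example is the lasso; a compact sketch of its scaled-form ADMM updates (our rendering of the standard x/z/u iteration, not code from the monograph):

import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    # min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z.
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    K = A.T @ A + rho * np.eye(n)  # factor once in a real implementation
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        x = np.linalg.solve(K, Atb + rho * (z - u))  # quadratic x-update
        z = soft(x + u, lam / rho)                   # shrinkage z-update
        u = u + x - z                                # scaled dual update
    return z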
Sudoku as a Constraint Problem
TLDR
This paper seeks to understand the puzzle from a constraint point of view, presents models for solving and generating puzzles, and gives an objective measure of the difficulty of a puzzle instance.
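
The constraint view is easy to make concrete: every row, column, and 3x3 box must contain all-different values. A tiny backtracking solver that enforces exactly those constraints (our illustration, not one of the paper's models):

def solve_sudoku(grid):
    # grid: list of 81 ints in row-major order, 0 = empty; solved in place.
    def ok(i, v):
        r, c = divmod(i, 9)
        for j in range(81):
            if grid[j] != v:
                continue
            rj, cj = divmod(j, 9)
            if rj == r or cj == c or (rj // 3 == r // 3 and cj // 3 == c // 3):
                return False
        return True

    try:
        i = grid.index(0)
    except ValueError:
        return True  # no empty cells left: solved
    for v in range(1, 10):
        if ok(i, v):
            grid[i] = v
            if solve_sudoku(grid):
                return True
            grid[i] = 0  # undo and backtrack
    return False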
Codes on graphs: normal realizations
  G. Forney, 2000 IEEE International Symposium on Information Theory, 2000
TLDR
This work shows that any Wiberg-type realization may be put into normal form without essential change in its graph or its decoding complexity, and that an appropriately defined dual of a group or linear normal realization realizes the dual group or linear code.
Decomposition methods for large scale LP decoding
TLDR
The key enabling technical result is a nearly linear-time algorithm for two-norm projection onto the parity polytope, which makes it possible to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently.
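
For context, the parity polytope in question is the convex hull of the even-weight binary vectors; in standard notation (our addition, using its well-known inequality description):

\mathbb{PP}_d = \operatorname{conv}\{ \mathbf{x} \in \{0,1\}^d : \textstyle\sum_i x_i \ \text{even} \},

0 \le x_i \le 1, \qquad \sum_{i \in S} x_i - \sum_{i \notin S} x_i \le |S| - 1 \quad \text{for all } S \subseteq \{1,\dots,d\} \text{ with } |S| \text{ odd}.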