Corpus ID: 1816482

Distributed Non-Convex ADMM-inference in Large-scale Random Fields

@inproceedings{Mikk2014DistributedNA,
  title={Distributed Non-Convex ADMM-inference in Large-scale Random Fields},
  author={Ond{\v{r}}ej Mik{\v{s}}{\'i}k and Patrick P{\'e}rez},
  year={2014}
}
We propose a parallel and distributed algorithm for solving discrete labeling problems in large-scale random fields. Our approach is motivated by the following observations: i) very large-scale image and video processing problems, such as labeling dozens of millions of pixels with thousands of labels, are routinely faced in many application domains; ii) the computational complexity of the current state-of-the-art inference algorithms makes them impractical for such large-scale problems; iii… 
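The abstract stops short of the algorithmic details, but the consensus-ADMM pattern it builds on is easy to illustrate. Below is a minimal, self-contained Python sketch on a toy convex problem: each worker owns a simple quadratic objective, a stand-in for the paper's discrete non-convex labeling subproblems, and workers are coupled only through a shared consensus variable. All values and names are illustrative, not taken from the paper.

```python
# Minimal consensus-ADMM sketch (toy stand-in for the paper's setting).
# Each worker i owns f_i(x) = 0.5 * a[i] * (x - b[i])^2 and keeps a local
# copy x[i] of the shared variable; agreement is enforced via ADMM.
import numpy as np

a = np.array([1.0, 2.0, 4.0])   # hypothetical per-worker curvatures
b = np.array([0.0, 1.0, 3.0])   # hypothetical per-worker targets
rho = 1.0                       # ADMM penalty parameter

x = np.zeros(3)                 # local copies of the shared variable
u = np.zeros(3)                 # scaled dual variables
z = 0.0                         # consensus variable

for _ in range(100):
    # x-update: closed-form minimizer of f_i(x) + (rho/2)(x - z + u_i)^2
    x = (a * b + rho * (z - u)) / (a + rho)
    # z-update: average of (x_i + u_i), since z carries no regularizer
    z = np.mean(x + u)
    # scaled dual ascent
    u = u + x - z

# At convergence z minimizes sum_i f_i, i.e. z = sum(a*b) / sum(a) = 2.0
print(z, np.sum(a * b) / np.sum(a))
```

The three steps (independent local solves, cheap averaging, a dual update) are what make the scheme parallel and distributed: each worker only ever communicates x_i + u_i.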
Testing Fine-Grained Parallelism for the ADMM on a Factor-Graph
TLDR
This work proposes a problem-independent scheme for accelerating the alternating direction method of multipliers (ADMM) that automatically exploits fine-grained parallelism on both GPUs and shared-memory multi-core machines, achieving significant speedups in application domains as diverse as combinatorial optimization, machine learning, and optimal control.
Discrete-Continuous Splitting for Weakly Supervised Learning
TLDR
A novel algorithm is proposed for a class of weakly supervised learning tasks; it learns a classifier from weak supervision given as hard and soft constraints on the labeling, and outperforms hard EM on this task.
Newton-Type Methods for Inference in Higher-Order Markov Random Fields
TLDR
It is shown that a trust-region Newton method can indeed be applied efficiently to a broad range of MAP inference problems, and a provably globally convergent framework is proposed that strikes a good compromise between computational complexity and precision in constructing the Hessian matrix.
Fast ADMM Algorithm for Distributed Optimization with Adaptive Penalty
TLDR
It is shown that the proposed method accelerates the convergence of ADMM by automatically selecting, at each iteration, the constraint penalty needed for parameter consensus; an extension is also proposed that adaptively determines the maximum number of iterations for updating the penalty.
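The summary above does not state the update rule; a common residual-balancing heuristic in the same spirit (grow the penalty when the primal residual dominates, shrink it when the dual residual does) can be sketched as follows. The thresholds `mu` and `tau` are conventional defaults, not values from the paper.

```python
# Residual-balancing penalty update (a standard heuristic, sketched
# here to illustrate the adaptive-penalty idea; the paper's exact rule
# may differ).
def update_penalty(rho, r_primal, s_dual, mu=10.0, tau=2.0):
    """Return an updated ADMM penalty given residual norms.

    mu  -- tolerated imbalance between primal and dual residuals
    tau -- multiplicative step for rho
    """
    if r_primal > mu * s_dual:
        return rho * tau    # consensus is lagging: enforce it harder
    if s_dual > mu * r_primal:
        return rho / tau    # dual side is lagging: relax the penalty
    return rho              # residuals balanced: leave rho alone
```

One caveat worth remembering: in the scaled form of ADMM, the dual variables must be rescaled by `old_rho / new_rho` whenever the penalty changes, otherwise the iteration is no longer a valid ADMM step.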
Global multiview registration using non-convex ADMM
TLDR
An optimization framework for global registration based on rank-constrained semidefinite programming is considered; an interesting finding is that the algorithm is robust to wrong correspondences, yielding high-quality reconstructions even when a significant fraction of the correspondences are corrupted.
An Empirical Study of ADMM for Nonconvex Problems
TLDR
The experiments suggest that ADMM performs well on a broad class of non-convex problems, and recently proposed adaptive ADMM methods, which automatically tune penalty parameters as the method runs, can improve algorithm efficiency and solution quality compared to ADMM with a non-tuned penalty.
Consensus Optimization for Distributed Registration
  • Rajat Sanyal, K. Chaudhury
  • Computer Science
    2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
  • 2018
TLDR
A distributed algorithm based on consensus optimization is proposed for the least-squares formulation of jointly registering multiple point sets under rigid transforms; it is able to localize very large networks that are beyond the scope of most existing localization methods.
Global Convergence of ADMM in Nonconvex Nonsmooth Optimization
TLDR
ADMM might be a better choice than ALM for some nonconvex nonsmooth problems: it is not only easier to implement but also more likely to converge in the scenarios considered.
...

References

Showing 1-10 of 28 references
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
TLDR
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
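For reference, the scaled-form global-variable consensus updates from this monograph (the same pattern sketched in Python earlier) are:

```latex
x_i^{k+1} = \operatorname*{arg\,min}_{x_i}
            \Big( f_i(x_i) + \tfrac{\rho}{2}\,\lVert x_i - z^k + u_i^k \rVert_2^2 \Big),
\qquad
z^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \big( x_i^{k+1} + u_i^k \big),
\qquad
u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1}.
```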
Alternating Directions Dual Decomposition
TLDR
AD3, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs based on the alternating directions method of multipliers, leads to a faster consensus than subgradient-based dual decomposition, both theoretically and in practice.
Message-passing for Graph-structured Linear Programs: Proximal Methods and Rounding Schemes
TLDR
A family of super-linearly convergent algorithms for solving linear programming (LP) relaxations, based on proximal minimization schemes using Bregman divergences, and proposes graph-structured randomized rounding schemes applicable to iterative LP-solving algorithms in general.
MRF Energy Minimization and Beyond via Dual Decomposition
TLDR
It is shown that, by appropriately choosing which subproblems to use, one can design novel and very powerful MRF optimization algorithms that generalize and extend state-of-the-art message-passing methods and take full advantage of any special structure present in particular MRFs.
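To make the dual-decomposition idea concrete, here is a deliberately tiny sketch: one shared discrete variable, two slave subproblems solved exactly by enumeration, and a subgradient step on the dual variables pushing the slaves toward agreement. The costs are made up for illustration; the paper's slaves are whole subgraphs (e.g., trees) solved by dynamic programming.

```python
# Toy dual decomposition: the energy theta1 + theta2 over one discrete
# variable is split between two slaves coupled by dual variables lam.
import numpy as np

theta1 = np.array([3.0, 1.0, 2.0, 4.0])  # hypothetical slave-1 costs
theta2 = np.array([2.0, 4.0, 1.0, 3.0])  # hypothetical slave-2 costs
lam = np.zeros(4)                        # dual variables

for k in range(1, 51):
    # each slave minimizes its reparameterized energy exactly
    x1 = int(np.argmin(theta1 + lam))
    x2 = int(np.argmin(theta2 - lam))
    if x1 == x2:
        break                            # slaves agree: labeling found
    # subgradient of the dual: indicator difference of the minimizers
    g = np.zeros(4)
    g[x1] += 1.0
    g[x2] -= 1.0
    lam += (1.0 / k) * g                 # diminishing step size

print("label:", x1, "energy:", theta1[x1] + theta2[x1])  # label 2, energy 3.0
```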
Parallel and distributed graph cuts by dual decomposition
  • Petter Strandmark, F. Kahl
  • Computer Science
    2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 2010
TLDR
This paper solves the maximum-flow/minimum-cut problem in parallel by splitting the graph into multiple parts, thereby further increasing the computational efficiency of graph cuts.
Bethe-ADMM for Tree Decomposition based Parallel MAP Inference
TLDR
This work presents a parallel MAP inference algorithm called Bethe-ADMM based on two ideas: tree decomposition of the graph and the alternating direction method of multipliers (ADMM). Unlike standard ADMM, however, it uses an inexact ADMM augmented with a Bethe-divergence-based proximal function, which makes each subproblem easy to solve in parallel using the sum-product algorithm.
A Comparative Study of Energy Minimization Methods for Markov Random Fields
TLDR
A set of energy minimization benchmarks is presented and used to compare the solution quality and running time of several common energy minimization algorithms, together with a general-purpose software interface that allows vision researchers to switch between optimization methods with minimal overhead.
An Augmented Lagrangian Approach to Constrained MAP Inference
TLDR
This work proposes a new algorithm for approximate MAP inference on factor graphs that combines augmented Lagrangian optimization with dual decomposition; it is provably convergent, parallelizable, and suited to fine decompositions of the graph.
An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision
TLDR
The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max-flow algorithms for applications in vision, comparing the running times of several standard algorithms as well as a recently developed one.
An Alternating Direction Method for Dual MAP LP Relaxation
TLDR
The algorithm, based on the alternating direction method of multipliers (ADMM), is guaranteed to converge to the global optimum of the LP relaxation objective and is competitive with other state-of-the-art algorithms for approximate MAP estimation.
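For context, the pairwise MAP LP relaxation that this method (and several of those above) attacks is standard. With unary potentials \theta_i, pairwise potentials \theta_{ij}, and pseudomarginals \mu over the local polytope, it reads:

```latex
\min_{\mu \ge 0}\;
  \sum_{i}\sum_{x_i} \theta_i(x_i)\,\mu_i(x_i)
  \;+\; \sum_{(i,j)}\sum_{x_i,x_j} \theta_{ij}(x_i,x_j)\,\mu_{ij}(x_i,x_j)
\quad \text{s.t.} \quad
  \sum_{x_i} \mu_i(x_i) = 1,\qquad
  \sum_{x_j} \mu_{ij}(x_i,x_j) = \mu_i(x_i),\qquad
  \sum_{x_i} \mu_{ij}(x_i,x_j) = \mu_j(x_j).
```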
...