Testing Fine-Grained Parallelism for the ADMM on a Factor-Graph

@article{Hao2016TestingFP,
  title={Testing Fine-Grained Parallelism for the ADMM on a Factor-Graph},
  author={Ning Hao and Amirreza Oghbaee and Mohammad Rostami and Nate Derbinsky and Jos{\'e} Bento},
  journal={2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},
  year={2016},
  pages={835-844}
}
  • Ning Hao, Amirreza Oghbaee, Mohammad Rostami, Nate Derbinsky, José Bento
  • Published 8 March 2016
  • Computer Science
  • 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
There is an ongoing effort to develop tools that apply distributed computational resources to tackle large problems or reduce the time to solve them. […] We show that this scheme, an interpretation of the ADMM as a message-passing algorithm on a factor-graph, can automatically exploit fine-grained parallelism both on GPUs and on shared-memory multi-core computers, and achieves significant speedups in application domains as diverse as combinatorial optimization, machine learning, and optimal control…
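To make the key method concrete, here is a minimal sketch of consensus-form ADMM written as message passing on a bipartite factor graph: each factor node runs an independent proximal update, each variable node averages the incoming messages, and the scaled duals live on the edges. The toy quadratic objective, the variable scopes, and the serial Python loops (which stand in for the GPU kernels or multi-core workers the paper targets) are illustrative assumptions, not the authors' actual implementation.

import numpy as np

rho = 1.0  # ADMM penalty parameter

# Toy problem (an assumption for illustration): minimize
#   sum_a 0.5 * ||x[scope_a] - c_a||^2
# over 3 shared variables, each factor touching a subset ("scope") of the
# global variable vector.
factors = [
    {"scope": np.array([0, 1]), "c": np.array([1.0, 2.0])},
    {"scope": np.array([1, 2]), "c": np.array([0.0, 4.0])},
    {"scope": np.array([0, 2]), "c": np.array([3.0, 1.0])},
]
n_vars = 3

def prox(factor, n):
    """Proximal step for f_a(x) = 0.5*||x - c_a||^2:
    argmin_x f_a(x) + (rho/2)*||x - n||^2, available in closed form here."""
    return (factor["c"] + rho * n) / (1.0 + rho)

# Edge state: one local copy x_aj and one scaled dual u_aj per (factor, variable) edge.
x_local = [np.zeros(len(f["scope"])) for f in factors]
u_local = [np.zeros(len(f["scope"])) for f in factors]
z = np.zeros(n_vars)  # consensus value at each variable node

for it in range(100):
    # (1) Factor-side updates: each factor reads only the messages z - u on its
    #     own edges, so these proximal steps are independent; this is the kind of
    #     fine-grained parallelism the paper maps onto GPU threads / CPU cores.
    for a, f in enumerate(factors):
        x_local[a] = prox(f, z[f["scope"]] - u_local[a])

    # (2) Variable-side updates: each variable node averages the incoming
    #     messages x + u from its neighboring factors.
    num = np.zeros(n_vars)
    cnt = np.zeros(n_vars)
    for a, f in enumerate(factors):
        num[f["scope"]] += x_local[a] + u_local[a]
        cnt[f["scope"]] += 1.0
    z = num / cnt

    # (3) Dual updates, one per edge, again embarrassingly parallel.
    for a, f in enumerate(factors):
        u_local[a] += x_local[a] - z[f["scope"]]

print("consensus solution z =", z)  # expected roughly [2.0, 1.0, 2.5]

Running this converges to the value each variable's factors agree on; in the paper's setting the bodies of the three loops become per-factor and per-edge kernels executed concurrently.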

Citations

Solving Fused Penalty Estimation Problems via Block Splitting Algorithms
  • T. Yen
  • Mathematics
    Journal of Computational and Graphical Statistics
  • 2019
TLDR
A method is proposed for solving a penalized estimation problem in which the penalty function is a function of differences between pairs of parameter vectors; the method introduces a set of equality constraints that connect each parameter vector to a group of auxiliary variables.
Tractable n-Metrics for Multiple Graphs
TLDR
A new family of multi-distances (distances between more than two elements) is proposed that satisfies a generalization of the properties of metrics to multiple elements and can be relaxed to convex optimization problems without losing the generalized metric property.
Efficient Projection onto the Perfect Phylogeny Model
TLDR
This paper uses Moreau's decomposition for proximal operators, and a tree reduction scheme, to develop a new algorithm for solving a projection problem that assigns a fitness cost to phylogenetic trees.
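For background on the decomposition mentioned in this summary: Moreau's decomposition is the standard identity relating the proximal operator of a closed, proper, convex function $f$ to that of its convex conjugate $f^{*}$; in its unit-step form (shown here only as general context, not as this paper's specific construction),

$$ v \;=\; \operatorname{prox}_{f}(v) \;+\; \operatorname{prox}_{f^{*}}(v), $$

so evaluating either proximal operator immediately gives the other.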
A family of tractable graph metrics
TLDR
A broad family of graph distances is defined that includes both the chemical and the Chartrand-Kubicki-Schultz distances, and these distances are proved to be tractable metrics.
A metric for sets of trajectories that is practical and mathematically consistent
TLDR
The proposed notion of closeness is the first to demonstrate the following three features: the metric can be computed quickly, it incorporates confusion of trajectories' identities in an optimal way, and it is a metric in the mathematical sense.
Using Task Descriptions in Lifelong Machine Learning for Improved Performance and Zero-Shot Transfer
TLDR
This work develops a lifelong learning method based on coupled dictionary learning that utilizes high-level task descriptions to model the inter-task relationships and shows that using task descriptors improves the performance of the learned task policies.
Exact inference under the perfect phylogeny model
TLDR
ExACT is a tool that explores the space of all possible phylogenetic trees and performs exact inference under the PPM with noisy data, allowing users to obtain not just the most likely tree for some input data but also exact statistics about the distribution of trees that might explain the data.
Estimating Cellular Goals from High-Dimensional Biological Data
TLDR
The first approach to estimating constraint reactions from data that can scale to realistically large metabolic models is developed, enabling accurate prediction of metabolic states in hundreds of growth environments not seen in training data.
...

References

Showing 1-10 of 33 references
Distributed Non-Convex ADMM-inference in Large-scale Random Fields
TLDR
This work proposes a parallel and distributed algorithm for solving discrete labeling problems in large-scale random fields, using a tree-based decomposition of the original optimization problem that is solved with a non-convex variant of the alternating direction method of multipliers (ADMM).
GPU computing in discrete optimization. Part II: Survey focused on routing problems
TLDR
A tutorial-style introduction to modern PC architectures and GPU programming is given, together with a broad survey of the literature on parallel computing in discrete optimization targeted at modern PCs, with special focus on routing problems.
MPC Toolbox with GPU Accelerated Optimization Algorithms
TLDR
It is demonstrated that using GPUs for solving MPC problems can provide a speedup in solution time, and a case study is presented in which GPUs are utilized for a Linear Programming Interior Point Method to solve a test case where a series of power plants must be controlled to minimize the cost of power production.
An efficient GPU implementation of the revised simplex method
TLDR
This paper presents an efficient GPU implementation of a very popular algorithm for linear programming, the revised simplex method, and describes how to carry out the steps of the revised simplex method so as to take full advantage of the parallel processing capabilities of a GPU.
SnapVX: A Network-Based Convex Optimization Solver
TLDR
SnapVX is a high-performance solver for convex optimization problems defined on networks that combines the capabilities of two open source software packages: Snap.py and CVXPY.
Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU
TLDR
This paper discusses optimization techniques for both CPU and GPU, analyzes what architecture features contributed to performance differences between the two architectures, and recommends a set of architectural features which provide significant improvement in architectural efficiency for throughput kernels.
D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
TLDR
D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice, convergence is observed even when these conditions are not met.
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
TLDR
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
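As a pointer for this last reference, the scaled-form ADMM iterations it popularized, for minimizing $f(x) + g(z)$ subject to $Ax + Bz = c$, are

$$ x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_{2}^{2}, $$
$$ z^{k+1} = \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_{2}^{2}, $$
$$ u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c, $$

where $u$ is the scaled dual variable and $\rho > 0$ the penalty parameter. In the consensus form underlying the factor-graph scheme above, the $x$-update separates into one independent proximal step per objective term, which is what gets distributed across fine-grained workers.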
...