Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation

@inproceedings{Xu2017AdaptiveRA,
  title={Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation},
  author={Zheng Xu and M{\'a}rio A. T. Figueiredo and Xiaoming Yuan and Christoph Studer and Tom Goldstein},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={7234-7243}
}
Many modern computer vision and machine learning applications rely on solving difficult optimization problems that involve non-differentiable objective functions and constraints. […] We propose an adaptive method that automatically tunes the key algorithm parameters to achieve optimal performance without user oversight. Inspired by recent work on adaptivity, the proposed adaptive relaxed ADMM (ARADMM) is derived by assuming a Barzilai-Borwein style linear gradient. A detailed convergence analysis…
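
The abstract describes the method only at a high level. As a point of reference, the sketch below shows plain relaxed ADMM (Python/NumPy) for the standard split min f(x) + g(z) s.t. x = z in scaled-dual form, with a simple residual-balancing update standing in for the paper's spectral, Barzilai-Borwein style adaptation of the penalty and relaxation parameters; the function names and the toy lasso instance are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def relaxed_admm(prox_f, prox_g, n, rho=1.0, alpha=1.5, max_iter=200, tol=1e-6):
        # Relaxed ADMM for min f(x) + g(z) s.t. x = z, with scaled dual u.
        # prox_f(v, rho) returns argmin_x f(x) + (rho/2)||x - v||^2 (likewise prox_g).
        # The penalty update below is plain residual balancing, used only as a
        # stand-in for the spectral rule of AADMM/ARADMM.
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        for k in range(max_iter):
            x = prox_f(z - u, rho)
            x_hat = alpha * x + (1.0 - alpha) * z      # over-/under-relaxation
            z_old = z
            z = prox_g(x_hat + u, rho)
            u = u + x_hat - z
            r = np.linalg.norm(x - z)                  # primal residual
            s = rho * np.linalg.norm(z - z_old)        # dual residual
            if r < tol and s < tol:
                break
            if r > 10.0 * s:                           # placeholder adaptive penalty
                rho, u = 2.0 * rho, u / 2.0            # rescale scaled dual with rho
            elif s > 10.0 * r:
                rho, u = rho / 2.0, 2.0 * u
        return x

    # Toy lasso instance: f(x) = 0.5*||Ax - b||^2, g(z) = lam*||z||_1.
    rng = np.random.default_rng(0)
    A, b, lam = rng.normal(size=(30, 10)), rng.normal(size=30), 0.1
    AtA, Atb = A.T @ A, A.T @ b
    prox_f = lambda v, rho: np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * v)
    prox_g = lambda v, rho: np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    x_star = relaxed_admm(prox_f, prox_g, n=10)

The paper's scheme instead re-estimates the penalty (and the relaxation parameter) from curvature estimates of the dual terms, rather than from residual ratios as in the placeholder above.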

Citations

Newton-ADMM: A Distributed GPU-Accelerated Optimizer for Multiclass Classification Problems

This work presents a novel distributed optimizer for classification problems that integrates a GPU-accelerated Newton-type solver with the global consensus formulation of the Alternating Direction Method of Multipliers (ADMM) and significantly outperforms state-of-the-art methods in distributed time to solution.

Alternating Optimization: Constrained Problems, Adversarial Networks, and Robust Models

This dissertation focuses on machine learning problems that can be formulated as minimax problems in training and studies alternating optimization methods that serve as fast, scalable, stable, and automated solvers, including adaptive ADMM (AADMM), a fully automated solver that achieves fast practical convergence by adapting the only free parameter in ADMM.

Adaptive Consensus ADMM for Distributed Optimization

An O(1/k) convergence rate is presented for adaptive ADMM methods with node-specific parameters, and an adaptive consensus ADMM (ACADMM) that automatically tunes parameters without user oversight is proposed.

Convergence analysis of a relaxed inertial alternating minimization algorithm with applications

The alternating direction method of multipliers (ADMM) is a popular method for solving convex separable minimization problems with linear equality constraints. The generalization of the two-block …

Relaxed inertial alternating minimization algorithm for three-block separable convex programming with applications

A variant of three-block AMA is designed by applying an inertial extension of the three-operator splitting algorithm to the dual problem, and convergence of the proposed algorithm is established in infinite-dimensional Hilbert spaces.

Relaxed hybrid consensus ADMM for distributed convex optimisation with coupling constraints

The authors offer a reformulation of the original H-ADMM in an operator-theoretical framework, which exploits the known relationship between ADMM and Douglas–Rachford splitting, and propose an adaptive penalty-parameter selection scheme that consistently improves the practical convergence of the algorithm.

TFPnP: Tuning-free Plug-and-Play Proximal Algorithm with Applications to Inverse Imaging Problems

This work presents a tuning-free PnP proximal algorithm that can automatically determine the internal parameters, including the penalty parameter, the denoising strength, and the termination time, and develops a policy network to search for these parameters automatically.

Algorithms and software for projections onto intersections of convex and non-convex sets with applications to inverse problems

Results show that inverse problems in physical parameter estimation and image processing benefit from regularization that uses all available prior information and need not be limited to one or two regularizers by algorithmic, computational, or hyper-parameter selection issues.

A Comprehensive Survey for Low Rank Regularization

Extensive experimental results demonstrate that non-convex regularizers can provide a large advantage over the nuclear norm, the regularizer most widely used in practice.

Unconstrained Proximal Operator: the Optimal Parameter for the Douglas-Rachford Type Primal-Dual Methods

This work proposes an alternative parametrized form of the proximal operator, whose parameter no longer needs to be positive, and establishes the optimal parameter choice for Douglas–Rachford type methods by solving a simple unconstrained optimization problem.

References

Showing 1-10 of 59 references

Adaptive ADMM with Spectral Penalty Parameter Selection

The resulting adaptive ADMM (AADMM) algorithm, inspired by the successful Barzilai-Borwein spectral method for gradient descent, yields fast convergence and relative insensitivity to the initial stepsize and problem scaling.
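
For orientation, the Barzilai-Borwein ("spectral") stepsizes referenced here are the standard secant-based estimates sketched below; this is the generic formula only, not the AADMM penalty rule itself, which combines curvature estimates of this flavor for both objective terms.

    import numpy as np

    def bb_estimates(dx, dg):
        # Barzilai-Borwein spectral estimates from iterate and gradient differences
        # dx = x_k - x_{k-1}, dg = grad(x_k) - grad(x_{k-1}); both approximate the
        # reciprocal of the local curvature along the step dx.
        bb_sd = (dx @ dx) / (dx @ dg)   # "steepest descent" estimate (larger)
        bb_mg = (dx @ dg) / (dg @ dg)   # "minimum gradient" estimate (smaller)
        return bb_sd, bb_mg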

Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing

Self-adaptive stepsize rules that automatically tune PDHG parameters for optimal convergence are proposed and shown to have strong advantages over non-adaptive methods in terms of both efficiency and simplicity for the user.

An Empirical Study of ADMM for Nonconvex Problems

The experiments suggest that ADMM performs well on a broad class of non-convex problems, and recently proposed adaptive ADMM methods, which automatically tune penalty parameters as the method runs, can improve algorithm efficiency and solution quality compared to ADMM with a non-tuned penalty.

Optimal Parameter Selection for the Alternating Direction Method of Multipliers (ADMM): Quadratic Problems

This paper finds the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of ℓ2-regularized minimization and constrained quadratic programming.

Fast ADMM Algorithm for Distributed Optimization with Adaptive Penalty

The proposed method is shown to accelerate the convergence of ADMM by automatically deciding the constraint penalty needed for parameter consensus in each iteration; an extension that adaptively determines the maximum number of iterations for updating the penalty is also proposed.

Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation

A linearized ADM (LADM) method is proposed by linearizing the quadratic penalty term and adding a proximal term when solving the sub-problems, allowing the penalty to change adaptively according to a novel update rule.
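
A minimal, hypothetical sketch of the linearization idea described here (names and notation are assumptions): rather than solving the x-subproblem with the full quadratic penalty, that penalty is linearized at the current iterate and a proximal term is added, so the subproblem collapses to a single proximal step.

    import numpy as np

    def linearized_x_update(x, z, u, A, B, c, rho, sigma, prox_f):
        # One linearized x-update for min f(x) + g(z) s.t. Ax + Bz = c (scaled dual u).
        # The penalty (rho/2)||Ax + Bz - c + u||^2 is linearized at x and a proximal
        # term (sigma/2)||x - x_prev||^2 is added, reducing the subproblem to a single
        # prox of f; sigma typically must satisfy sigma >= rho * ||A||_2^2.
        grad = rho * A.T @ (A @ x + B @ z - c + u)   # gradient of the penalty at x
        return prox_f(x - grad / sigma, sigma)       # prox_f(v, s) = argmin f + (s/2)||. - v||^2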

Fast Optimization Methods for L1 Regularization: A Comparative Study and Two New Approaches

Two new techniques are proposed: one based on a smooth (differentiable) convex approximation of the L1 regularizer that does not depend on any assumptions about the loss function, and a new strategy that addresses the non-differentiability of the L1 regularizer directly.

Fast Alternating Direction Optimization Methods

This paper considers accelerated variants of two common alternating direction methods: the alternating direction method of multipliers (ADMM) and the alternating minimization algorithm (AMA), of the form first proposed by Nesterov for gradient descent methods.
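
The Nesterov-type acceleration mentioned here applies FISTA-style momentum, typically to the second primal block and the (scaled) dual variable, together with a restart test; the snippet below is a minimal sketch of only that momentum step, with names chosen for illustration rather than taken from the paper.

    import numpy as np

    def accelerate(z_new, z_old, u_new, u_old, alpha_k, restart=False):
        # FISTA-style momentum applied after a standard ADMM step. restart=True
        # (triggered in the full method when a combined residual grows) returns
        # the unaccelerated iterates and resets the momentum sequence.
        if restart:
            return z_new, u_new, 1.0
        alpha_next = (1.0 + np.sqrt(1.0 + 4.0 * alpha_k ** 2)) / 2.0
        w = (alpha_k - 1.0) / alpha_next             # momentum weight in [0, 1)
        return z_new + w * (z_new - z_old), u_new + w * (u_new - u_old), alpha_next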

Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning

This paper proposes LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently, introduces a simple optimality measure, and establishes the convergence rate of LADMPSAP in an ergodic sense.

Training Neural Networks Without Gradients: A Scalable ADMM Approach

This paper explores an unconventional training method that uses alternating direction methods and Bregman iteration to train networks without gradient descent steps, and exhibits strong scaling in the distributed setting, yielding linear speedups even when split over thousands of cores.
...