• Corpus ID: 245218863

# A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems

@inproceedings{Subramanyam2021AGC,
title={A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems},
author={Anirudh Subramanyam and Youngdae Kim and Michel Schanen and F. Pacaud and Mihai Anitescu},
year={2021}
}
• Published 16 December 2021
• Computer Science
Motivated by the increasing availability of high-performance parallel computing, we design a distributed parallel algorithm for linearly-coupled block-structured nonconvex constrained optimization problems. Our algorithm performs Jacobi-type proximal updates of the augmented Lagrangian function, requiring only local solutions of separable block nonlinear programming (NLP) problems. We provide a cheap and explicitly computable Lyapunov function that allows us to establish global and local…
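The abstract describes Jacobi-type proximal updates of an augmented Lagrangian, where each block subproblem is solved with the other blocks frozen at the previous iterate and a dual ascent step enforces the coupling constraint. A minimal sketch of this update pattern, on an illustrative toy problem (quadratic blocks with a closed-form subproblem solution, not the paper's exact method or its nonconvex NLP setting; all names and parameter values below are assumptions):

```python
import numpy as np

# Toy instance of  min_x  sum_i f_i(x_i)  s.t.  sum_i A_i @ x_i = b,
# with f_i(x_i) = 0.5*||x_i - c_i||^2 so each block subproblem is
# solvable in closed form. This is an illustrative sketch of a
# Jacobi-type proximal augmented-Lagrangian loop, not the paper's algorithm.
rng = np.random.default_rng(0)
n, m = 3, 2                              # block dimension, coupling constraints
A = [rng.standard_normal((m, n)) for _ in range(2)]
c = [rng.standard_normal(n) for _ in range(2)]
b = rng.standard_normal(m)

rho, theta = 1.0, 1.0                    # AL penalty and proximal weight (assumed values)
x = [np.zeros(n), np.zeros(n)]
y = np.zeros(m)

for k in range(500):
    x_old = [xi.copy() for xi in x]
    # Jacobi step: every block minimizes the proximal augmented Lagrangian
    # with the OTHER blocks frozen at the previous iterate, so the block
    # updates are independent and could run in parallel.
    for i in range(2):
        others = sum(A[j] @ x_old[j] for j in range(2) if j != i)
        # Stationarity of  0.5||xi-c_i||^2 + y^T(A_i xi + others - b)
        #   + rho/2 ||A_i xi + others - b||^2 + theta/2 ||xi - x_old_i||^2
        # gives the linear system (1+theta)I + rho*A_i^T A_i times xi = rhs.
        H = (1.0 + theta) * np.eye(n) + rho * A[i].T @ A[i]
        rhs = c[i] + theta * x_old[i] - A[i].T @ (y + rho * (others - b))
        x[i] = np.linalg.solve(H, rhs)
    # Dual ascent on the coupling-constraint residual.
    r = sum(A[i] @ x[i] for i in range(2)) - b
    y = y + rho * r

print("final residual norm:", np.linalg.norm(r))
```

On this strongly convex toy instance the coupling residual decays to (near) zero; the paper's contribution is establishing convergence guarantees for such Jacobi schemes in the much harder nonconvex constrained setting via an explicitly computable Lyapunov function.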
## 2 Citations


• Engineering
ICPP Workshops
• 2022
Maintaining electric power system stability is paramount, especially in extreme contingencies involving unexpected outages of multiple generators or transmission lines that are typical during severe…
• Chemistry
• 2021
Targeting Exascale with Julia on GPUs for multiperiod optimization with scenario constraints. M. Anitescu, K. Kim, Y. Kim, A. Maldonado, F. Pacaud, V. Rao, M. Schanen, S. Shin, A. Subramanyam.
