A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems
@inproceedings{Subramanyam2021AGC,
  title={A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems},
  author={Anirudh Subramanyam and Youngdae Kim and Michel Schanen and F. Pacaud and Mihai Anitescu},
  year={2021}
}
Motivated by the increasing availability of high-performance parallel computing, we design a distributed parallel algorithm for linearly-coupled block-structured nonconvex constrained optimization problems. Our algorithm performs Jacobi-type proximal updates of the augmented Lagrangian function, requiring only local solutions of separable block nonlinear programming (NLP) problems. We provide a cheap and explicitly computable Lyapunov function that allows us to establish global and local…
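For orientation, a Jacobi-type proximal update of the augmented Lagrangian for a linearly-coupled block problem of the form min \sum_i f_i(x_i) subject to \sum_i A_i x_i = b and x_i \in X_i can be sketched as follows. This is a generic sketch, not necessarily the paper's exact scheme; the penalty \rho, proximal weight \theta, and multiplier \lambda are standard notation assumed here.

\[
x_i^{k+1} \in \operatorname*{argmin}_{x_i \in X_i}\; f_i(x_i) + \langle \lambda^k, A_i x_i \rangle + \frac{\rho}{2}\Big\| A_i x_i + \sum_{j \neq i} A_j x_j^k - b \Big\|^2 + \frac{\theta}{2}\|x_i - x_i^k\|^2 \quad \text{for all blocks } i \text{ in parallel},
\]
\[
\lambda^{k+1} = \lambda^k + \rho\Big(\sum_i A_i x_i^{k+1} - b\Big).
\]

Because each subproblem involves only the block variable x_i, with the other blocks frozen at their previous values, the per-iteration work reduces to separable block NLPs that can be solved concurrently, matching the abstract's description.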
2 Citations
Frequency Recovery in Power Grids using High-Performance Computing
- Engineering, ICPP Workshops
- 2022
Maintaining electric power system stability is paramount, especially in extreme contingencies involving unexpected outages of multiple generators or transmission lines that are typical during severe…
SIAG/OPT Views and News 29-1, 2021
- Chemistry
- 2021
Articles: "Targeting Exascale with Julia on GPUs for multiperiod optimization with scenario constraints" by M. Anitescu, K. Kim, Y. Kim, A. Maldonado, F. Pacaud, V. Rao, M. Schanen, S. Shin, and A. Subramanyam…
References
Showing 1-10 of 44 references
Fast and stable nonconvex constrained distributed optimization: the ELLADA algorithm
- Computer Science, Optimization and Engineering
- 2021
An extra-layer architecture is adopted to accommodate nonconvexity and handle inequality constraints, and a modified Anderson acceleration is employed for reducing the number of iterations of the proposed algorithm, named ELLADA.
On the Convergence of a Distributed Augmented Lagrangian Method for Nonconvex Optimization
- Computer Science, IEEE Transactions on Automatic Control
- 2017
This is the first work that shows convergence to local minima specifically for a distributed augmented Lagrangian (AL) method applied to nonconvex optimization problems; distributed AL methods are known to perform very well when used to solve convex problems.
On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
- Computer Science, IEEE Transactions on Control of Network Systems
- 2016
Two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated, and the complete convergence of the ADMM for a class of low-dimensional problems is characterized.
Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
- Computer Science, IEEE Transactions on Signal Processing
- 2017
The proposed framework is very general and flexible and unifies several existing successive convex approximation (SCA)-based algorithms and naturally leads to distributed and parallelizable implementations for a large class of nonconvex problems.
Linear Convergence of First- and Zeroth-Order Primal-Dual Algorithms for Distributed Nonconvex Optimization
- Mathematics, Computer Science, IEEE Trans. Autom. Control.
- 2022
This article considers the distributed nonconvex optimization problem of minimizing a global cost function, formed by a sum of local cost functions, using local information exchange. It also proposes a distributed zeroth-order algorithm, derived from the considered first-order algorithm via a deterministic gradient estimator, and shows that it enjoys the same convergence properties.
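As a point of reference, the kind of deterministic gradient estimator that such zeroth-order schemes typically rely on is a coordinate-wise two-point finite-difference estimator; the smoothing radius \delta > 0 and unit vectors e_i below are generic notation assumed here, not taken from the article:

\[
\widehat{\nabla} f(x) \;=\; \sum_{i=1}^{n} \frac{f(x + \delta e_i) - f(x - \delta e_i)}{2\delta}\, e_i .
\]

Because such an estimator is deterministic, the zeroth-order iteration can be analyzed as an inexact version of the first-order one, which is consistent with the matching convergence properties claimed above.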
A Proximal Linearization-based Decentralized Method for Nonconvex Problems with Nonlinear Constraints
- Mathematics, arXiv
- 2020
Unlike traditional (augmented) Lagrangian-based methods, which usually require exact (local) optima at each iteration, the proposed method leverages a proximal linearization technique to update the decision variables iteratively, making it computationally efficient and viable for nonlinear cases.
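Concretely, a proximal linearization step of the kind described here replaces the exact (local) minimization of the augmented Lagrangian L_\rho(\cdot, \lambda^k) with a single linearized proximal update; the following is a generic sketch with stepsize parameter c > 0, not the paper's exact iteration:

\[
x^{k+1} = \operatorname*{argmin}_{x \in X} \; \big\langle \nabla_x L_\rho(x^k, \lambda^k),\, x - x^k \big\rangle + \frac{c}{2}\,\|x - x^k\|^2 ,
\]

so each iteration costs one gradient evaluation plus a projection-like step instead of a full nonlinear solve.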
A two-level distributed algorithm for nonconvex constrained optimization
- Computer Science, Computational Optimization and Applications
- 2022
A reformulation of the alternating direction method of multipliers (ADMM) is proposed that enables a two-level algorithm, which embeds a specially structured three-block ADMM at the inner level within an augmented Lagrangian framework; global and local convergence, as well as the iteration complexity of this new scheme, are proved for general nonconvex constrained programs.
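The enabling reformulation can be read as a slack-variable splitting; the following is a hedged sketch in generic notation, and the paper's exact block structure may differ. The coupled problem min f(x) s.t. Ax = b, x \in X is rewritten as

\[
\min_{x \in X,\; z} \; f(x) \quad \text{s.t.} \quad Ax + z = b, \qquad z = 0,
\]

where the inner ADMM handles the relaxed coupling Ax + z = b and the outer augmented-Lagrangian loop drives the slack z to zero.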
Penalty Dual Decomposition Method for Nonsmooth Nonconvex Optimization—Part I: Algorithms and Convergence Analysis
- Computer Science, IEEE Transactions on Signal Processing
- 2020
An algorithm named penalty dual decomposition (PDD) is proposed for these difficult problems and its various applications are discussed and its performance is evaluated by customizing it to three applications arising from signal processing and wireless communications.
Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Computer Science, Mathematics, Comput. Optim. Appl.
- 2019
This paper considers constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints, and shows a sublinear rate of convergence to an ε-stationary solution, in the form of a variational inequality, for a generalized conditional gradient method.
A First-Order Primal-Dual Method for Nonconvex Constrained Optimization Based On the Augmented Lagrangian
- Computer Science, Mathematics
- 2020
It is demonstrated that NAPP-AL converges to a stationary solution at the rate of o(1/√k), where k is the number of iterations, and it is shown that the well-known Kurdyka-Łojasiewicz property and metric subregularity imply the aforementioned VP-EB condition.