A nonsmooth primal-dual method with simultaneous adaptive PDE constraint solver

@article{Jensen2022ANP,
  title={A nonsmooth primal-dual method with simultaneous adaptive PDE constraint solver},
  author={Bj{\o}rn Jensen and Tuomo Valkonen},
  journal={arXiv preprint arXiv:2211.04807},
  year={2022}
}
We introduce an efficient first-order primal-dual method for the solution of nonsmooth PDE-constrained optimization problems. We achieve this efficiency by not solving the PDE or its linearisation on each iteration of the optimization method. Instead, we run the method in parallel with a simple conventional linear system solver (Jacobi, Gauss–Seidel, conjugate gradients), always taking only one step of the linear system solver for each step of the optimization method. The control parameter is…
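The abstract's core idea — take one sweep of a stationary linear solver per optimization step instead of solving the PDE to tolerance — can be sketched on a toy problem. This is a schematic illustration under assumptions of my own, not the authors' method: I use a proximal-gradient outer iteration (rather than their primal-dual method), a made-up diagonally dominant SPD matrix `A` standing in for a discretized PDE, and a Jacobi inner sweep; all names are hypothetical.

```python
import numpy as np

# Toy "discrete PDE" operator: SPD and diagonally dominant, so a Jacobi
# sweep contracts.  Problem: min_u 0.5*||x(u) - x_target||^2 + alpha*||u||_1
# subject to the state equation A x = u.
n = 20
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_target = np.sin(np.linspace(0.0, np.pi, n))   # desired state
alpha, tau = 1e-3, 0.5                          # L1 weight, step size
D = np.diag(A)                                  # Jacobi diagonal

def prox_l1(v, t):
    """Soft-thresholding: proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

u = np.zeros(n)   # control
x = np.zeros(n)   # running state estimate for A x = u (never solved exactly)
p = np.zeros(n)   # running adjoint estimate for A p = x - x_target

for _ in range(3000):
    # One Jacobi sweep on the state equation, instead of a full solve:
    x = x + (u - A @ x) / D
    # One Jacobi sweep on the adjoint equation (A is symmetric):
    p = p + ((x - x_target) - A @ p) / D
    # Proximal-gradient step on the control, using the inexact adjoint p:
    u = prox_l1(u - tau * p, tau * alpha)
```

Each outer step thus costs only two matrix–vector products; solving both linear systems to tolerance on every iteration is exactly what the interleaved scheme avoids.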


References

Showing 1–10 of 30 references

Primal-Dual Extragradient Methods for Nonlinear Nonsmooth PDE-Constrained Optimization

The accelerated algorithm is shown to apply to examples of inverse problems with $L^1$- and $L^\infty$-fitting terms, as well as to state-constrained optimal control problems, where convergence can be guaranteed after introducing an (arbitrarily small, still nonsmooth) Moreau–Yosida regularization.

Simultaneous Pseudo-Timestepping for PDE-Model Based Optimization Problems

  • S. Hazra, V. Schulz
  • Computer Science, Mathematics
    Universität Trier, Mathematik/Informatik, Forschungsbericht
  • 2002
A new method is presented for the solution of optimization problems with PDE constraints, based on simultaneous pseudo-timestepping for evolution equations; it uses a preconditioner derived from the continuous reduced SQP method.

Bilevel Optimization with Single-Step Inner Methods

We propose a new approach to solving bilevel optimization problems, intermediate between solving full-system optimality conditions with a Newton-type approach, and treating the inner problem as an…

Introduction to Nonsmooth Analysis and Optimization

These notes aim to give an introduction to generalized derivative concepts useful in deriving necessary optimality conditions and numerical algorithms for infinite-dimensional nondifferentiable optimization problems.

Simultaneous single-step one-shot optimization with unsteady PDEs

The Primal-Dual Active Set Strategy as a Semismooth Newton Method

The notion of slant differentiability is recalled, and it is argued that the $\max$-function is slantly differentiable in $L^p$-spaces when appropriately combined with a two-norm concept; this leads to new local convergence results for the primal-dual active set strategy.
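The primal-dual active set strategy summarized above can be sketched in finite dimensions as a semismooth Newton method on the complementarity equation $\lambda = \max(0, \lambda - c\,x)$ for the bound-constrained QP $\min \tfrac12 x^\top A x - b^\top x$ s.t. $x \ge 0$. The matrix and data below are made up for illustration, and this sketch does not capture the $L^p$/two-norm subtleties the paper actually addresses.

```python
import numpy as np

# Bound-constrained QP: min 0.5*x'Ax - b'x  s.t.  x >= 0.
# KKT system: A x - b - lam = 0,  x >= 0,  lam >= 0,  lam'x = 0,
# with complementarity rewritten as lam = max(0, lam - c*x).
n = 8
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD M-matrix
b = np.array([1.0, -2.0, 0.5, -1.0, 2.0, -0.5, 1.5, -1.5])
c = 1.0

x = np.zeros(n)
lam = np.zeros(n)
for _ in range(30):
    # Active-set prediction from the max-function (the semismooth Newton step):
    active = lam - c * x > 0
    inactive = ~active
    x_new = np.zeros(n)
    if inactive.any():
        # On the inactive set, lam = 0: solve the reduced system A_II x_I = b_I.
        II = np.ix_(inactive, inactive)
        x_new[inactive] = np.linalg.solve(A[II], b[inactive])
    lam_new = np.zeros(n)
    # On the active set, x = 0: recover the multiplier from stationarity.
    lam_new[active] = (A @ x_new - b)[active]
    if np.array_equal(x_new, x) and np.array_equal(lam_new, lam):
        break   # active set stabilized: finite termination
    x, lam = x_new, lam_new
```

For M-matrices such as this one, the strategy is known to terminate after finitely many active-set updates, which is the finite-dimensional counterpart of the local superlinear convergence discussed in the paper.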

Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces

  • M. Ulbrich
  • Mathematics
    MOS-SIAM Series on Optimization
  • 2011
The author covers adjoint-based derivative computation and the efficient solution of Newton systems by multigrid and preconditioned iterative methods.

Primal–Dual Proximal Splitting and Generalized Conjugation in Non-smooth Non-convex Optimization

We demonstrate that difficult non-convex non-smooth optimization problems, such as Nash equilibrium problems and anisotropic as well as isotropic Potts segmentation models, can be written in terms of…

One-Shot Approaches to Design Optimization

The paper outlines the close relations between a fixed-point-solver-based piggyback approach and a reduced SQP method in Jacobi and Seidel variants, and shows that the retardation factor between simulation and optimization is bounded below by 2.

Testing and Non-linear Preconditioning of the Proximal Point Method

  • T. Valkonen
  • Computer Science
    Applied Mathematics & Optimization
  • 2018
This work formalises common arguments in convergence and convergence-rate proofs of optimisation methods as the verification of a simple iteration-wise inequality, and demonstrates the effectiveness of the general approach on several classical algorithms as well as their stochastic variants.