Complexity of Inexact Proximal Point Algorithm for minimizing convex functions with Hölderian Growth
@inproceedings{Ptracu2021ComplexityOI,
  title  = {Complexity of Inexact Proximal Point Algorithm for minimizing convex functions with Hölderian Growth},
  author = {Andrei Pătraşcu and Paul Irofti},
  year   = {2021}
}
Several decades ago, the Proximal Point Algorithm (PPA) began to attract long-lasting interest from both the abstract operator theory and the numerical optimization communities. Even in modern applications, researchers still use proximal minimization theory to design scalable algorithms that overcome nonsmoothness. Remarkable works such as [9,4,5,51] established tight relations between the convergence behaviour of PPA and the regularity of the objective function. In this manuscript we derive nonasymptotic…
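For context, a brief recap of the two standard objects the abstract refers to (the notation $\mu$, $p$, $\sigma_p$ below is ours and may differ from the paper's): the exact proximal point step with parameter $\mu > 0$ is $z^{k+1} = \operatorname{prox}_{\mu f}(z^k) = \arg\min_z \{ f(z) + \frac{1}{2\mu}\|z - z^k\|^2 \}$, and $f$ has Hölderian growth of order $p \ge 1$ with modulus $\sigma_p > 0$ if $f(x) - f^* \ge \sigma_p \, \mathrm{dist}(x, X^*)^p$ for all $x$, where $X^*$ is the set of minimizers.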
54 References
An adaptive proximal point algorithm framework and application to large-scale optimization
- Computer Science, Mathematics
- 2020
Proposes an adaptive generalized proximal point algorithm (AGPPA) that updates the proximal regularization parameters based on implementable criteria; AGPPA is shown to achieve linear convergence without any knowledge of the error-bound parameter, at a rate that differs from the optimal one only by a logarithmic term.
Error bounds for proximal point subproblems and associated inexact proximal point algorithms
- Mathematics, Math. Program.
- 2000
A new merit function is proposed for proximal point subproblems associated with the variational inequality problem, based on Burachik-Iusem-Svaiter's concept of ε-enlargement of a maximal monotone operator; the resulting inexact method preserves all the desirable global and local convergence properties of the classical exact/inexact method.
On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
- Mathematics, Computer Science, SIAM J. Optim.
- 2016
Under the assumptions of local weak sharp minima of order $p$ ($p \in [1,2]$) and a quasi-regularity condition, a local superlinear convergence rate is established for the linearized proximal algorithm (LPA).
Faster subgradient methods for functions with Hölderian growth
- Mathematics, Computer Science, Math. Program.
- 2020
This manuscript derives new convergence results for several subgradient methods applied to minimizing nonsmooth convex functions with Hölderian growth, and develops an adaptive variant of the "descending stairs" stepsize that achieves the same convergence rate without requiring the error-bound constant, which is difficult to estimate in practice (a sketch of such a schedule is given below).
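As a rough illustration of the "descending stairs" idea (function names and constants below are ours, not the paper's), the stepsize is held constant within a stage and shrunk geometrically between stages; a minimal Python sketch for a user-supplied subgradient oracle:

```python
import numpy as np

def descending_stairs_subgradient(subgrad, x0, gamma0=1.0, stage_len=100,
                                  shrink=0.5, n_stages=10):
    """Subgradient method with a piecewise-constant, geometrically
    decaying ('descending stairs') stepsize. Illustrative sketch only:
    the cited paper's adaptive variant tunes the stages without knowing
    the error-bound constant."""
    x = np.asarray(x0, dtype=float)
    gamma = gamma0
    for _ in range(n_stages):
        for _ in range(stage_len):
            x = x - gamma * subgrad(x)  # plain subgradient step
        gamma *= shrink                 # shrink stepsize between stages
    return x

# Example: minimize f(x) = ||x||_1, whose subgradient is sign(x).
x_min = descending_stairs_subgradient(np.sign, x0=np.ones(5))
```

The adaptive variant in the cited paper chooses the stage lengths on the fly, precisely to avoid hard-coding a quantity like `stage_len` from an unknown error-bound constant.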
Inexact accelerated high-order proximal-point methods
- Mathematics, Computer Science
- 2020
Proposes a new framework of Bi-Level Unconstrained Minimization (BLUM) for the development of accelerated methods in convex programming, and presents new methods with an exact auxiliary search procedure that attain the convergence rate $O(k^{-(3p+1)/2})$, where $p \ge 1$ is the order of the proximal operator.
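For reference, the $p$th-order proximal step referred to here is usually defined, for a regularization constant $H > 0$ (notation ours), as $x_{k+1} = \arg\min_x \{ f(x) + \frac{H}{p+1}\|x - x_k\|^{p+1} \}$; for $p = 1$ this reduces to the classical proximal step.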
Inexact and accelerated proximal point algorithms
- Mathematics, Computer Science
- 2011
It is shown that the $1/k^2$ convergence rate of the exact accelerated algorithm can be recovered by constraining the errors to be of a certain type, and the strategy proposed in [14] for generating estimate sequences in the sense of Nesterov is generalized.
Proximal point algorithm, Douglas-Rachford algorithm and alternating projections: a case study
- Mathematics, Computer Science
- 2015
The findings suggest that the Douglas-Rachford algorithm outperforms the method of alternating projections in the absence of constraint qualifications, and the exact asymptotic rates of convergence are studied.
From error bounds to the complexity of first-order descent methods for convex functions
- Mathematics, Math. Program.
- 2017
It is shown that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization, and that Kurdyka-Łojasiewicz (KL) inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems.
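For completeness, the convex form of the KL inequality invoked here states that, near the optimal set, $\varphi'(f(x) - f^*) \, \mathrm{dist}(0, \partial f(x)) \ge 1$ for some concave desingularizing function $\varphi$ with $\varphi(0) = 0$; taking $\varphi(s) = c\, s^{1/p}$ recovers a Hölderian error bound of order $p$ (constants and neighbourhoods omitted; this paraphrase is ours).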
Monotone Operators and the Proximal Point Algorithm
- Mathematics
- 1976
For the problem of minimizing a lower semicontinuous proper convex function $f$ on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{z^k\}$ by taking $z^{k+1}$ to be the minimizer of $f(z) + \frac{1}{2c_k}\|z - z^k\|^2$, where $c_k > 0$.
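A minimal Python sketch of this scheme in its inexact form, which is the setting of the surveyed manuscript (helper names and the inner solver are our illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

def inexact_ppa(f, x0, c=1.0, n_iters=30, inner_tol=1e-6):
    """Inexact proximal point algorithm: each outer step approximately
    minimizes the subproblem f(z) + ||z - x_k||^2 / (2c). Sketch only:
    the stopping rule for the inner solve is the crux of inexact PPA
    analyses and is simplified here to a solver tolerance."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        prox_obj = lambda z, xk=x: f(z) + np.sum((z - xk) ** 2) / (2.0 * c)
        # Derivative-free inner solver, since f may be nonsmooth.
        x = minimize(prox_obj, x, method="Nelder-Mead", tol=inner_tol).x
    return x

# Example: the nonsmooth function f(x) = |x_1| + |x_2|.
sol = inexact_ppa(lambda z: np.abs(z).sum(), x0=np.array([3.0, -2.0]))
```

Classical analyses (such as Rockafellar's inexactness criterion) require the inner errors to be summable across iterations; the fixed tolerance above is a simplification.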
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Computer Science, J. Mach. Learn. Res.
- 2017
This paper gives practical guidelines for using Catalyst and presents a comprehensive theoretical analysis of its global complexity, showing that Catalyst applies to a large class of algorithms, including gradient descent, block coordinate descent, and incremental algorithms such as SAG, SAGA, SDCA, SVRG, and Finito/MISO, together with their proximal variants.