Further properties of the forward–backward envelope with applications to difference-of-convex programming

@article{Liu2017FurtherPO,
  title={Further properties of the forward–backward envelope with applications to difference-of-convex programming},
  author={Tianxiang Liu and Ting Kei Pong},
  journal={Computational Optimization and Applications},
  year={2017},
  volume={67},
  pages={489-520}
}
In this paper, we further study the forward–backward envelope first introduced in Patrinos and Bemporad (Proceedings of the IEEE Conference on Decision and Control, pp 2358–2363, 2013) and Stella et al. (Comput Optim Appl, doi:10.1007/s10589-017-9912-y, 2017) for problems whose objective is the sum of a proper closed convex function and a twice continuously differentiable (possibly nonconvex) function with Lipschitz continuous gradient. We derive sufficient conditions on the original problem for…
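For reference, the forward–backward envelope in this setting admits the following standard sketch (the notation F = f + g and the stepsize γ > 0 are assumed here, not taken from the truncated abstract): with g proper closed convex and f twice continuously differentiable with Lipschitz continuous gradient,
\[
  F_\gamma(x) \;=\; \min_{y}\Big\{ f(x) + \langle \nabla f(x),\, y - x\rangle + g(y) + \tfrac{1}{2\gamma}\|y - x\|^{2} \Big\}
  \;=\; f(x) - \tfrac{\gamma}{2}\|\nabla f(x)\|^{2} + g_{\gamma}\big(x - \gamma\nabla f(x)\big),
\]
where g_γ denotes the Moreau envelope of g with parameter γ; the minimizer in the first expression is precisely the forward–backward (proximal gradient) step prox_{γg}(x − γ∇f(x)).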
Forward-Backward Envelope for the Sum of Two Nonconvex Functions: Further Properties and Nonmonotone Linesearch Algorithms
TLDR
It is shown that the forward-backward envelope (FBE), an exact and strictly continuous penalty function for the original cost, still enjoys favorable first- and second-order properties which are key for the convergence results of ZeroFPR.
Forward–backward quasi-Newton methods for nonsmooth optimization problems
TLDR
This work proposes an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points; the analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem.
A proximal difference-of-convex algorithm with extrapolation
TLDR
A proximal difference-of-convex algorithm with extrapolation is proposed to possibly accelerate the proximal DCA, and it is shown that any cluster point of the sequence generated by the algorithm is a stationary point of the DC optimization problem for a fairly general choice of extrapolation parameters.
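As an illustration of the kind of scheme summarized above, here is a minimal sketch of a proximal DC iteration with extrapolation; the oracle names (grad_f, prox_g, subgrad_h) and the FISTA-like extrapolation weights are assumptions made for this sketch, not necessarily the paper's exact choices.

import numpy as np

def pdca_with_extrapolation(x0, grad_f, prox_g, subgrad_h, L, max_iter=500):
    # Sketch: minimize F(x) = f(x) + g(x) - h(x), where f is smooth with
    # L-Lipschitz gradient, g is proper closed convex and prox-friendly,
    # and h is convex with an available subgradient oracle.
    # prox_g(v, t) is assumed to return argmin_y { g(y) + ||y - v||^2 / (2t) }.
    x_prev = np.asarray(x0, dtype=float).copy()
    x = x_prev.copy()
    t_prev, t = 1.0, 1.0
    for _ in range(max_iter):
        beta = (t_prev - 1.0) / t              # extrapolation weight in [0, 1)
        y = x + beta * (x - x_prev)            # extrapolated point
        xi = subgrad_h(x)                      # subgradient of the concave part at x
        # forward-backward step on f + g, with -h linearized at x
        x_next = prox_g(y - (grad_f(y) - xi) / L, 1.0 / L)
        x_prev, x = x, x_next
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return x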
Proximal envelopes: Smooth optimization algorithms for nonsmooth problems
TLDR
An interpretation of proximal algorithms as unconstrained gradient methods over an associated function is provided; proximal envelopes thus provide a link between nonsmooth and smooth optimization and allow the application of more efficient and robust smooth optimization algorithms to the solution of nonsmooth, possibly constrained problems.
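Concretely, in the simplest convex Euclidean instance of this idea (a standard identity, stated here only for illustration), a gradient step on the Moreau envelope e_{λf} of a convex function f with stepsize λ is exactly a proximal point step with the proximal mapping P_{λf}:
\[
  \nabla e_{\lambda f}(x) \;=\; \tfrac{1}{\lambda}\big(x - P_{\lambda f}(x)\big),
  \qquad\text{so}\qquad
  x^{k+1} \;=\; x^{k} - \lambda\,\nabla e_{\lambda f}(x^{k}) \;=\; P_{\lambda f}(x^{k}).
\]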
Retraction-based first-order feasible sequential quadratic programming methods for difference-of-convex programs with smooth inequality and simple geometric constraints
In this paper, we propose first-order feasible sequential quadratic programming (SQP) methods for difference-of-convex (DC) programs with smooth inequality and simple geometric constraints. Different…
Bregman forward-backward splitting for nonconvex composite optimization: superlinear convergence to nonisolated critical points
We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfies a relative smoothness…
The modified second APG method for DC optimization problems
TLDR
A variant of the second accelerated proximal gradient method, introduced by Nesterov and by Auslender and Teboulle, is constructed for minimizing DC functions (differences of two convex functions).
Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity: A Convergence and Robustness Analysis of PANOC
Composite optimization offers a powerful modeling tool for a variety of applications and is often numerically solved by means of proximal gradient methods. In this paper, we consider fully nonconvex…
An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems
In this paper, we propose a new method for a class of difference-of-convex (DC) optimization problems, whose objective is the sum of a smooth function and a possibly nonprox-friendly DC function. The…
Newton-Type Alternating Minimization Algorithm for Convex Optimization
TLDR
Experiments show that using limited-memory directions in NAMA greatly improves the convergence speed over AMA and its accelerated variant, and the proposed method is well suited for embedded applications and large-scale problems.

References

SHOWING 1-10 OF 40 REFERENCES
Forward–backward quasi-Newton methods for nonsmooth optimization problems
TLDR
This work proposes an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points; the analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem.
Calculus of the Exponent of Kurdyka–Łojasiewicz Inequality and Its Applications to Linear Convergence of First-Order Methods
TLDR
The Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods, is studied, and various calculus rules are developed to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents.
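For context, the KL property with exponent α at a point x̄ is commonly stated as follows (a standard formulation, assumed here rather than quoted from the paper): there exist c, ε, ν > 0 such that
\[
  \operatorname{dist}\big(0, \partial f(x)\big) \;\ge\; c\,\big(f(x) - f(\bar{x})\big)^{\alpha},
  \qquad \alpha \in [0, 1),
\]
for all x with ‖x − x̄‖ ≤ ε and f(x̄) < f(x) < f(x̄) + ν; the exponent α = 1/2 is the case typically associated with linear convergence of first-order methods.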
Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods
TLDR
This work proves an abstract convergence result for descent methods that satisfy a sufficient-decrease assumption and allow a relative error tolerance; the result guarantees convergence of bounded sequences under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality.
Semi-Smooth Second-order Type Methods for Composite Convex Programs
The goal of this paper is to study approaches to bridge the gap between first-order and second-order type methods for composite convex programs. Our key observations are: i) Many well-known operator…
A coordinate gradient descent method for nonsmooth separable minimization
TLDR
A (block) coordinate gradient descent method is proposed for solving this class of nonsmooth separable problems; global convergence is established and, under a local Lipschitzian error bound assumption, linear convergence of the method is shown.
Minimization of ℓ1-2 for Compressed Sensing
TLDR
A sparsity-oriented simulated annealing procedure with non-Gaussian random perturbation is proposed, and the almost sure convergence of the combined algorithm (DCASA) to a global minimum is proved.
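The ℓ1−2 model referred to in this title is, in its standard form (assumed here for illustration), itself a DC program:
\[
  \min_{x \in \mathbb{R}^{n}}\; \|x\|_{1} - \|x\|_{2}
  \quad \text{subject to} \quad A x = b,
\]
or, in unconstrained form, \(\min_{x}\ \tfrac{1}{2}\|Ax - b\|_{2}^{2} + \lambda\big(\|x\|_{1} - \|x\|_{2}\big)\).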
Penalty Methods for a Class of Non-Lipschitz Optimization Problems
TLDR
A penalty method whose subproblems are solved via a nonmonotone proximal gradient method with a suitable update scheme for the penalty parameters is discussed, and the convergence of the algorithm to a KKT point of the constrained problem is proved.
Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Lojasiewicz Inequality
TLDR
A convergent proximal reweighted ℓ1 algorithm for compressive sensing and an application to rank reduction problems are provided; the convergence analysis depends on the geometrical properties of the function L around its critical points.
A unified approach to error bounds for structured convex optimization problems
TLDR
A new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function, is presented.
The Moreau envelope function and proximal mapping in the sense of the Bregman distance
In this paper, we explore some properties of the Moreau envelope function e_{λf}(x) of f and the associated proximal mapping P_{λf}(x) in the sense of the Bregman distance induced by a…
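In the classical Euclidean case these objects are (standard definitions, given here for orientation; the Bregman versions replace the quadratic term by a Bregman distance):
\[
  e_{\lambda f}(x) \;=\; \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^{2} \Big\},
  \qquad
  P_{\lambda f}(x) \;=\; \operatorname*{arg\,min}_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^{2} \Big\}.
\]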