Smooth minimization of non-smooth functions

@article{Nesterov2005SmoothMO,
  title={Smooth minimization of non-smooth functions},
  author={Yurii Nesterov},
  journal={Mathematical Programming},
  year={2005},
  volume={103},
  pages={127-152}
}
  • Y. Nesterov
  • Published 1 May 2005
  • Computer Science
  • Mathematical Programming
Abstract. In this paper we propose a new approach for constructing efficient schemes for non-smooth convex optimization. It is based on a special smoothing technique, which can be applied to functions with explicit max-structure. Our approach can be considered as an alternative to black-box minimization. From the viewpoint of efficiency estimates, we manage to improve the traditional bounds on the number of iterations of the gradient schemes from O(1/ε²) to O(1/ε), keeping basically the complexity of each iteration unchanged.
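The core construction in the paper: a nonsmooth convex function with explicit max-structure, f(x) = max_{u in Q} { <Ax, u> - phi(u) }, is approximated by f_mu(x) = max_{u in Q} { <Ax, u> - phi(u) - mu d(u) }, where d is a prox-function strongly convex on Q. The approximation f_mu has a Lipschitz continuous gradient, so a fast gradient method applied to it yields the O(1/ε) overall bound. As a minimal sketch (the finite max of affine functions with the entropy prox-function is a standard example from the paper; the plain gradient-descent loop and all identifiers below are our own illustration, not the paper's scheme), smoothing f(x) = max_i (a_i^T x + b_i) gives the log-sum-exp function f_mu(x) = mu log sum_i exp((a_i^T x + b_i)/mu), which satisfies f <= f_mu <= f + mu log m:

import numpy as np

def smoothed_max(A, b, x, mu):
    # Nesterov-style smoothing of f(x) = max_i (a_i^T x + b_i) with the
    # entropy prox-function: f_mu(x) = mu * log(sum_i exp((A x + b)_i / mu)).
    z = (A @ x + b) / mu
    zmax = z.max()                        # shift for numerical stability
    w = np.exp(z - zmax)
    value = mu * (zmax + np.log(w.sum()))
    gradient = A.T @ (w / w.sum())        # A^T * softmax((A x + b) / mu)
    return value, gradient

# Toy usage: minimize f_mu by plain gradient descent. The paper pairs the
# smoothed objective with an optimal (accelerated) gradient method; the
# simple loop below only keeps the sketch short.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = np.zeros(5)
mu = 1e-2
L = np.linalg.norm(A, 2) ** 2 / mu        # Lipschitz constant of grad f_mu
for _ in range(500):
    _, g = smoothed_max(A, b, x, mu)
    x -= g / L

Choosing mu = ε / (2 log m) keeps the approximation error below ε/2, which is how the smoothing parameter trades accuracy against the gradient Lipschitz constant ||A||² / mu.

Citations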
SMOOTH CONVEX APPROXIMATION AND ITS APPLICATIONS
In this thesis, we consider a smooth convex approximation to the sum of the κ largest components. To make it applicable to a wide class of applications, the study is conducted on a general min-max model.
Unconstrained Convex Minimization in Relative Scale
  • Y. Nesterov
  • Mathematics, Computer Science
    Math. Oper. Res.
  • 2009
A new approach to constructing schemes for unconstrained convex minimization is presented, which computes approximate solutions with a certain relative accuracy using a structural model of the objective function and the efficient smoothing technique.
Universal gradient methods for convex optimization problems
New methods for black-box convex minimization are presented, which demonstrate that the fast rate of convergence, typical for smooth optimization problems, can sometimes be achieved even on nonsmooth problem instances.
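The parameter-free idea these methods rely on can be sketched compactly: take a gradient step with a trial constant L and accept it only if a quadratic upper model relaxed by eps/2 holds, doubling L otherwise. The following is our own minimal rendering of the primal variant under those assumptions (the paper's dual and accelerated variants, and its stopping rule, are omitted; all identifiers are illustrative):

def universal_gradient(f, grad, x0, eps, L0=1.0, iters=200):
    # Universal primal gradient sketch: backtrack on the constant L using a
    # descent condition with an eps/2 slack, so no smoothness (or Hoelder)
    # parameters of f need to be known in advance. x0 is a NumPy array.
    x, L = x0.copy(), L0
    for _ in range(iters):
        fx, g = f(x), grad(x)
        L = max(L / 2.0, 1e-12)          # let L decrease again over time
        while True:
            x_new = x - g / L
            d = x_new - x
            if f(x_new) <= fx + g @ d + 0.5 * L * (d @ d) + eps / 2.0:
                break                    # step accepted
            L *= 2.0                     # otherwise tighten the model
        x = x_new
    return x

The eps/2 slack is what lets the same loop run on nonsmooth instances, where no finite Lipschitz constant for the gradient exists; the accelerated variant in the paper is the one that attains the fast smooth-case rate.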
Penalty and Smoothing Methods for Convex Semi-Infinite Programming
This paper introduces a unified framework concerning Remez-type algorithms and integral methods coupled with penalty and smoothing methods, which not only subsumes well-known classical algorithms but also provides some new methods with interesting properties.
An introduction to non-smooth convex analysis via multiplicative derivative
  • A. Tor
  • Mathematics
    Journal of Taibah University for Science
  • 2019
In this study, *-directional derivative and *-subgradient are defined using the multiplicative derivative, making a new contribution to non-Newtonian calculus for use in non-smooth analysis.
Application of a Smoothing Technique to Decomposition in Convex Optimization
A new decomposition method is derived, called the "proximal center algorithm," which from the viewpoint of efficiency estimates improves the bounds on the number of iterations of the classical dual gradient scheme by an order of magnitude.
Smoothing techniques and difference of convex functions algorithms for image reconstructions
Characterizations of differentiability for real-valued functions based on generalized differentiation provide the mathematical foundation for Nesterov's smoothing techniques in infinite dimensions, and lead to a simple approach to image reconstruction based on Nesterov's smoothing and algorithms for minimizing differences of convex functions that involve regularization.
A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems
An efficient algorithm is proposed for solving a class of unconstrained nondifferentiable convex optimization problems in finite-dimensional spaces by regularizing the Fenchel dual problem in two steps into a differentiable, strongly convex one with Lipschitz continuous gradient.
Barrier smoothing for nonsmooth convex minimization
While the barrier smoothing approach maintains the sublinear convergence rate, it affords a new analytic step size, which significantly enhances the practical convergence of the gradient method as compared to proximity smoothing.

References

Showing 1-10 of 24 references
Convex analysis and minimization algorithms
Contents: IX. Inner Construction of the Subdifferential; X. Conjugacy in Convex Analysis; XI. Approximate Subdifferentials of Convex Functions; XII. Abstract Duality for Practitioners; XIII. Methods of ε-Descent.
Lectures on modern convex optimization - analysis, algorithms, and engineering applications
The authors present the basic theory of state-of-the-art polynomial-time interior-point methods for linear, conic quadratic, and semidefinite programming, as well as their numerous applications in engineering.
Nonlinear rescaling vs. smoothing technique in convex optimization
  • R. Polyak
  • Mathematics, Computer Science
    Math. Program.
  • 2002
An alternative to the smoothing-technique approach for constrained optimization: Nonlinear Rescaling (NR) transforms the constraints of a given constrained optimization problem into an equivalent set of constraints, and global quadratic convergence of the NR methods is established for Linear Programming with a unique dual solution.
On the Bertsekas' method for minimization of composite functions
Most conventional methods of minimizing nondifferentiable functions (for instance, the subgradient method) are applicable to functions of "general form".
Introductory Lectures on Convex Optimization - A Basic Course
It was in the middle of the 1980s when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, and it became more and more common that new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments.
An Introduction to Optimization
  • E. Chong, S. Żak
  • Computer Science
    IEEE Antennas and Propagation Magazine
  • 1996
This review discusses background mathematics, linear programming, set-constrained and unconstrained optimization, methods of proof and notation, and problems with equality constraints.
On convergence rates of subgradient optimization methods
Rates of convergence of subgradient optimization are studied. If the step size is chosen to be a geometric progression with ratio ρ, the convergence, if it occurs, is geometric with rate ρ.
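A toy demonstration of the quoted claim (our own illustration, not from the reference; identifiers are hypothetical): the subgradient method on f(x) = |x| with step sizes alpha_k = alpha_0 ρ^k has geometrically decaying error whenever ρ >= 1/2 and |x_0| <= alpha_0 / (1 - ρ):

import numpy as np

def subgradient_geometric(x0=0.9, alpha0=1.0, rho=0.7, iters=40):
    # Subgradient method on f(x) = |x| with geometrically decreasing steps.
    # For rho >= 1/2 and |x0| <= alpha0 / (1 - rho), induction gives
    # |x_k| <= alpha0 * rho**k / (1 - rho), i.e. geometric decay at rate rho.
    x, alpha, errors = x0, alpha0, []
    for _ in range(iters):
        g = np.sign(x)                   # a subgradient of |x| (0 at x = 0)
        x -= alpha * g
        alpha *= rho
        errors.append(abs(x))
    return errors

print(subgradient_geometric()[-5:])      # errors shrink roughly like rho**k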
Convex Analysis and Minimization Algorithms, vols. I and II
  • 1993
Introductory Lectures on Convex Optimization, to be published by Kluwer