On the Complexity of Deterministic Nonsmooth and Nonconvex Optimization

@article{Jordan2022OnTC,
  title={On the Complexity of Deterministic Nonsmooth and Nonconvex Optimization},
  author={M.I. Jordan and Tianyi Lin and Manolis Zampetakis},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.12463}
}
In this paper, we present several new results on minimizing a nonsmooth and nonconvex function under a Lipschitz condition. Recent work suggests that while the classical notion of Clarke stationarity is computationally intractable up to a sufficiently small constant tolerance, randomized first-order algorithms find a $(\delta, \epsilon)$-Goldstein stationary point with a complexity bound of $O(\delta^{-1}\epsilon^{-3})$, which is independent of the problem dimension [Zhang et al., 2020, Davis et al., 2021, Tian et al., 2022…
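
For readers unfamiliar with the notion, the stationarity concept used above can be stated as follows (a standard formulation following Zhang et al., 2020; the notation here is a summary, not a quotation from the paper):

\[
\partial_{\delta} f(x) \;=\; \mathrm{conv}\Bigl(\,\bigcup_{y \in \mathbb{B}_{\delta}(x)} \partial f(y)\Bigr),
\qquad
x \ \text{is}\ (\delta,\epsilon)\text{-Goldstein stationary} \iff \min_{g \in \partial_{\delta} f(x)} \|g\| \;\le\; \epsilon,
\]

where $\partial f(y)$ denotes the Clarke subdifferential and $\mathbb{B}_{\delta}(x)$ is the closed ball of radius $\delta$ around $x$.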

The cost of nonconvexity in deterministic nonsmooth optimization

We study the impact of nonconvexity on the complexity of nonsmooth optimization, emphasizing objectives such as piecewise linear functions, which may not be weakly convex. We focus on a…

Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization

A more efficient algorithm using a stochastic recursive gradient estimator is proposed, which improves the complexity to $O(L^3 d^{3/2} \epsilon^{-3} + \Delta L^2 d^{3/2} \delta^{-1} \epsilon^{-3})$.

On Bilevel Optimization without Lower-level Strong Convexity

This work identifies two classes of growth conditions on the lower-level objective that lead to continuity and proposes the Inexact Gradient-Free Method (IGFM), which can be used to solve the bilevel problem, using an approximate zeroth-order oracle that is of independent interest.

Zero-Sum Stochastic Stackelberg Games

This paper proves the existence of recursive (i.e., Markov perfect) Stackelberg equilibria (recSE) in zero-sum stochastic games, provides necessary and sufficient conditions for a policy to be a recSE, and shows that recSE can be computed in (weakly) polynomial time via value iteration.

References

Showing 1-10 of 67 references

On the Complexity of Finding Small Subgradients in Nonsmooth Optimization

It is proved that, in general, no finite-time algorithm can produce points with small subgradients, even for convex functions, and several lower bounds for this task are established that hold for any randomized algorithm, with or without convexity.

Oracle Complexity in Nonsmooth Nonconvex Optimization

This paper studies nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points, and analyzes the most natural relaxation of getting near $\epsilon$-stationary points.
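
For concreteness, one common formalization of "getting near $\epsilon$-stationary points" (a paraphrase of the relaxation studied in that line of work, not text from the abstract) is to ask for an output point that is $\delta$-close to some $\epsilon$-stationary point:

\[
\mathrm{dist}\Bigl(x,\; \bigl\{\, y : \min_{g \in \partial f(y)} \|g\| \le \epsilon \,\bigr\}\Bigr) \;\le\; \delta,
\]

where $\partial f(y)$ is again the Clarke subdifferential; this is weaker than requiring near-stationarity at the output point itself.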

Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization

The relationship between the celebrated Goldstein subdifferential [Goldstein, 1977] and uniform smoothing is established, thereby providing the basis and intuition for the design of gradient-free methods that guarantee the finite-time convergence to a set of Goldstein stationary points.
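
As a rough illustration of the uniform-smoothing connection mentioned above, the ball-smoothed surrogate $f_\delta(x) = \mathbb{E}_{u}[f(x + \delta u)]$ (with $u$ uniform on the unit ball) admits a simple randomized two-point gradient estimator. The sketch below is a simplification under that standard construction, not the authors' implementation; the test function, step size, and iteration count are illustrative choices.

import numpy as np

def two_point_smoothed_grad(f, x, delta, rng):
    """One two-point estimate of the gradient of the uniform smoothing
    f_delta(x) = E_{u ~ Unif(unit ball)}[f(x + delta * u)].
    Illustrative sketch only, not the paper's algorithm."""
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)            # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Toy usage: gradient-free descent on a nonsmooth Lipschitz function.
rng = np.random.default_rng(0)
f = lambda z: np.abs(z).sum()         # the l1 norm is Lipschitz and nonsmooth
x = np.ones(5)
for _ in range(500):
    x -= 0.01 * two_point_smoothed_grad(f, x, delta=0.1, rng=rng)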

On the Finite-Time Complexity and Practical Computation of Approximate Stationarity Concepts of Lipschitz Functions

We report a practical finite-time algorithmic scheme to compute approximately stationary points for nonconvex nonsmooth Lipschitz functions. In particular, we are interested in two kinds of…

A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions

This paper shows that both of these assumptions can be dropped by simply adding a small random perturbation in each step of their algorithm, and presents a new cutting plane algorithm that achieves better efficiency in low dimensions: $O(d\epsilon^{-3})$ for Lipschitz functions and $O(d\epsilon^{-2})$ for those that are, in addition, weakly convex.

A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization

A practical, robust algorithm, based on gradient sampling, is presented to locally minimize functions f that are continuous on $\mathbb{R}^n$ and continuously differentiable on an open dense subset.
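
To make the gradient-sampling idea concrete, here is a minimal single-step sketch under simplifying assumptions of my own (a crude Frank-Wolfe solve of the min-norm subproblem and a fixed step size; the published algorithm instead solves that quadratic program exactly and uses a line search and a sampling-radius reduction rule).

import numpy as np

def min_norm_in_hull(G, iters=200):
    """Approximate the minimum-norm point in the convex hull of the rows of G
    via Frank-Wolfe. Illustrative only."""
    m = G.shape[0]
    lam = np.full(m, 1.0 / m)
    p = G.T @ lam
    for _ in range(iters):
        i = int(np.argmin(G @ p))     # hull vertex most aligned with -p
        d = p - G[i]
        denom = d @ d
        if denom < 1e-16:
            break
        gamma = float(np.clip((p @ d) / denom, 0.0, 1.0))
        lam *= (1.0 - gamma)
        lam[i] += gamma
        p = G.T @ lam
    return p

def gradient_sampling_step(grad, x, radius, m, rng, step=1e-2):
    """One simplified gradient-sampling step: sample gradients near x and
    move against the minimum-norm element of their convex hull."""
    nearby = x + radius * rng.uniform(-1.0, 1.0, size=(m, x.size))
    G = np.vstack([grad(x)] + [grad(p) for p in nearby])
    g = min_norm_in_hull(G)
    return x - step * g, float(np.linalg.norm(g))

# Toy usage with f(x) = |x|_1, whose gradient np.sign(x) is valid almost everywhere.
rng = np.random.default_rng(1)
x = np.ones(4)
for _ in range(300):
    x, gnorm = gradient_sampling_step(np.sign, x, radius=0.1, m=8, rng=rng)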

Worst-Case Evaluation Complexity and Optimality of Second-Order Methods for Nonconvex Smooth Optimization

A new general class of inexact second-order algorithms for unconstrained optimization, which includes regularization and trust-region variations of Newton's method as well as their linesearch variants, is considered; the analysis implies that these methods have optimal worst-case evaluation complexity within a wider class of second-order methods, and that Newton's method is suboptimal.
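
As background for the complexity claims above (a summary of the standard cubic-regularization setting, not text from the paper): methods of this type build, at each iterate $x_k$, a regularized quadratic model and take a step that approximately minimizes it,

\[
m_k(s) \;=\; f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3}\,\|s\|^3,
\]

and such methods are known to reach a point with $\|\nabla f(x)\| \le \epsilon$ within $O(\epsilon^{-3/2})$ function and derivative evaluations, which is the optimal worst-case rate referred to above.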

Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization

A slightly revised version of the gradient sampling algorithm of Burke, Lewis, and Overton for minimizing a locally Lipschitz function on $\mathbb{R}^n$ that is continuously differentiable on an open dense subset is introduced.

Optimization of Lipschitz continuous functions

A class of functions called uniformly-locally-convex, which is also tractable, is introduced, and algorithms for it are sketched.
...