# On the Complexity of Deterministic Nonsmooth and Nonconvex Optimization

```bibtex
@article{Jordan2022OnTC,
  title   = {On the Complexity of Deterministic Nonsmooth and Nonconvex Optimization},
  author  = {M. I. Jordan and Tianyi Lin and Manolis Zampetakis},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2209.12463}
}
```

In this paper, we present several new results on minimizing a nonsmooth and nonconvex function under a Lipschitz condition. Recent work suggests that while the classical notion of Clarke stationarity is computationally intractable up to a sufficiently small constant tolerance, randomized first-order algorithms find a (δ, ε)-Goldstein stationary point with a complexity bound of O(δ^{-1} ε^{-3}), which is independent of the problem dimension [Zhang et al., 2020, Davis et al., 2021, Tian et al., 2022…
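For context, the (δ, ε)-Goldstein stationarity notion referenced above is standard [Zhang et al., 2020]: a point x is (δ, ε)-Goldstein stationary for f when the Goldstein δ-subdifferential, the convex hull of Clarke subdifferentials over a δ-ball around x, contains an element of norm at most ε:

```latex
% Goldstein δ-subdifferential and (δ, ε)-Goldstein stationarity
\[
\partial_{\delta} f(x) \;=\; \operatorname{conv}\!\Bigl(\,\bigcup_{y \in \mathbb{B}_{\delta}(x)} \partial f(y)\Bigr),
\qquad
\min_{g \in \partial_{\delta} f(x)} \lVert g \rVert \;\le\; \epsilon .
\]
```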

## 4 Citations

### The cost of nonconvexity in deterministic nonsmooth optimization

- Mathematics
- 2022

We study the impact of nonconvexity on the complexity of nonsmooth optimization, emphasizing objectives such as piecewise linear functions, which may not be weakly convex. We focus on a…

### Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization

- Computer Science, Mathematics
- 2023

A more efficient algorithm using a stochastic recursive gradient estimator, which improves the complexity to O(L^3 d^{3/2} ε^{-3} + Δ L^2 d^{3/2} δ^{-1} ε^{-3}).

### On Bilevel Optimization without Lower-level Strong Convexity

- Mathematics, Computer Science, ArXiv
- 2023

This work identifies two classes of growth conditions on the lower-level objective that lead to continuity and proposes the Inexact Gradient-Free Method (IGFM), which can be used to solve the bilevel problem using an approximate zeroth-order oracle that is of independent interest.

### Zero-Sum Stochastic Stackelberg Games

- Economics, ArXiv
- 2022

This paper proves the existence of recursive (i.e., Markov perfect) Stackelberg equilibria (recSE) in zero-sum stochastic games, provides necessary and sufficient conditions for a policy to be a recSE, and shows that recSE can be computed in (weakly) polynomial time via value iteration.

## References

Showing 1-10 of 67 references

### On the Complexity of Finding Small Subgradients in Nonsmooth Optimization

- Mathematics, Computer Science, ArXiv
- 2022

It is proved that in general no finite-time algorithm can produce points with small subgradients even for convex functions, and several lower bounds for this task are established which hold for any randomized algorithm, with or without convexity.

### Oracle Complexity in Nonsmooth Nonconvex Optimization

- Computer Science, Mathematics, NeurIPS
- 2021

This paper studies nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points, and studies the most natural relaxation of getting near ε-stationary points.

### Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization

- Computer Science, ArXiv
- 2022

The relationship between the celebrated Goldstein subdifferential [Goldstein, 1977] and uniform smoothing is established, thereby providing the basis and intuition for the design of gradient-free methods that guarantee finite-time convergence to a set of Goldstein stationary points.
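As a concrete illustration of the uniform-smoothing idea in this abstract (a minimal sketch, not the paper's exact method; the function `f`, radius `delta`, and sample count are placeholder choices), the smoothed surrogate f_δ(x) = E_{u∼Unif(B_δ)}[f(x + δu)] is differentiable, and its gradient admits the classical two-point randomized estimator:

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, n_samples=2000, rng=None):
    """Monte-Carlo estimate of the gradient of the uniform smoothing
    f_delta(x) = E_{u ~ Unif(unit ball)} f(x + delta * u),
    via the standard two-point estimator
        g = (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
    with u drawn uniformly from the unit sphere."""
    rng = np.random.default_rng(rng)
    d = x.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        g += (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return g / n_samples
```

When the δ-ball around x avoids all kinks of f, each two-point sample already equals a true directional difference quotient; e.g. for f(x) = |x₁| at x = 1 with δ = 0.1, every sample contributes exactly 1, so the estimate is exact.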

### On the Finite-Time Complexity and Practical Computation of Approximate Stationarity Concepts of Lipschitz Functions

- Mathematics, ICML
- 2022

We report a practical finite-time algorithmic scheme to compute approximately stationary points for nonconvex nonsmooth Lipschitz functions. In particular, we are interested in two kinds of…

### A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions

- Computer Science
- 2021

This paper shows that both of these assumptions can be dropped by simply adding a small random perturbation in each step of their algorithm, and presents a new cutting plane algorithm that achieves better efficiency in low dimensions: O(d ε^{-3}) for Lipschitz functions and O(d ε^{-2}) for those that are weakly convex.

### A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization

- Computer Science, Mathematics, SIAM J. Optim.
- 2005

A practical, robust algorithm, based on gradient sampling, is presented to locally minimize a continuous function f on $\mathbb{R}^n$ that is continuously differentiable on an open dense subset.
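The gradient sampling idea summarized above can be sketched in a few lines (a simplified illustration under assumptions, not the Burke-Lewis-Overton algorithm itself: an analytic gradient oracle `grad` is assumed, the min-norm element of the sampled-gradient hull is approximated by Frank-Wolfe over the simplex, and the line search is replaced by a fixed step of length δ):

```python
import numpy as np

def min_norm_in_hull(G, iters=500):
    """Approximate the minimum-norm element of conv(rows of G) by
    Frank-Wolfe on the simplex: minimize ||G^T lam||^2 over lam in the simplex."""
    m = G.shape[0]
    lam = np.full(m, 1.0 / m)
    for t in range(iters):
        g = lam @ G                 # current point in the hull
        scores = G @ g              # linearized objective at each vertex
        j = np.argmin(scores)       # Frank-Wolfe vertex
        step = 2.0 / (t + 2.0)
        lam = (1.0 - step) * lam
        lam[j] += step
    return lam @ G

def gradient_sampling_step(grad, x, delta, n_samples=30, rng=None):
    """One gradient-sampling step: sample gradients in a delta-ball around x
    and move along the negative min-norm element of their convex hull."""
    rng = np.random.default_rng(rng)
    d = x.size
    pts = x + delta * rng.uniform(-1.0, 1.0, size=(n_samples, d))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])
    g = min_norm_in_hull(G)
    if np.linalg.norm(g) <= 1e-12:  # hull contains ~0: approximately stationary
        return x
    return x - delta * g / np.linalg.norm(g)
```

On f(x) = ‖x‖₁, whose gradient is sign(x) wherever it exists, repeated steps shrink the iterate toward the origin; once the δ-ball straddles the kinks, the sampled hull contains near-zero elements and the computed direction vanishes.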

### Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization

- Computer Science, Proceedings of the International Congress of Mathematicians (ICM 2018)
- 2019

A new general class of inexact second-order algorithms for unconstrained optimization is considered, including regularization and trust-region variations of Newton's method as well as their linesearch variants; the results imply that these methods have optimal worst-case evaluation complexity within a wider class of second-order methods, and that Newton's method is suboptimal.

### Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization

- Mathematics, SIAM J. Optim.
- 2007

A slightly revised version of the gradient sampling algorithm of Burke, Lewis, and Overton for minimizing a locally Lipschitz function on $\mathbb{R}^n$ that is continuously differentiable on an open dense subset is introduced.

### Optimization of Lipschitz continuous functions

- Mathematics, Math. Program.
- 1977

A class of functions called uniformly locally convex, which is also tractable, is introduced, and algorithms for it are sketched.

### Complexity bounds for second-order optimality in unconstrained optimization

- Mathematics, Computer Science, J. Complex.
- 2012