# A systematic approach to Lyapunov analyses of continuous-time models in convex optimization

@article{Moucer2022ASA, title={A systematic approach to Lyapunov analyses of continuous-time models in convex optimization}, author={Céline Moucer and Adrien B. Taylor and Francis R. Bach}, journal={ArXiv}, year={2022}, volume={abs/2205.12772} }

First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually approached via Lyapunov functions. In this work, we provide a systematic and principled approach to find and verify Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [9], to continuous-time models. We retrieve convergence results…
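As a toy illustration of the kind of guarantee such a Lyapunov function certifies (a hedged sketch, not the paper's performance-estimation procedure), the snippet below numerically checks the decay V(t) ≤ exp(-2μt)·V(0) for the candidate V(x) = f(x) - f* along gradient flow on a strongly convex quadratic; the matrix `A`, initial point, step size, and horizon are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): check that V(x) = f(x) - f*
# is a Lyapunov function for gradient flow x'(t) = -grad f(x(t)) on a
# mu-strongly convex quadratic.  Along the flow, dV/dt = -||grad f||^2
# <= -2*mu*V, which implies the certificate V(t) <= exp(-2*mu*t) * V(0).
# A, the initial point, dt, and T below are assumed test values.

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
f = lambda x: 0.5 * x @ A @ x            # minimizer x* = 0, so f* = 0
grad = lambda x: A @ x
mu = np.linalg.eigvalsh(A).min()         # strong convexity parameter

x = np.array([1.0, -2.0])
V0 = f(x)
dt, T, t = 1e-3, 2.0, 0.0
while t < T:
    x = x - dt * grad(x)                 # forward Euler on the gradient flow
    t += dt
    # the continuous-time certificate holds along this discretized trajectory
    assert f(x) <= np.exp(-2 * mu * t) * V0 * (1 + 1e-9)
```

For this quadratic the check is guaranteed to pass: each eigenmode contracts by (1 - dt·λ)² per Euler step, which is strictly below the continuous-time factor exp(-2λ·dt).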

## References


Continuous-Time Analysis of Accelerated Gradient Methods via Conservation Laws in Dilated Coordinate Systems

- Computer Science, ICML
- 2022

It is shown that a semi-second-order symplectic Euler discretization in the dilated coordinate system leads to an O(1/k^2) rate on the standard setup of smooth convex minimization, without any further assumptions such as infinite differentiability.

A Lyapunov Analysis of Accelerated Methods in Optimization

- Computer Science, J. Mach. Learn. Res.
- 2021

There is an equivalence between the technique of estimate sequences and a family of Lyapunov functions in both continuous and discrete time, which allows for a unified analysis of many existing accelerated algorithms, the introduction of new algorithms, and a strengthened connection between accelerated algorithms and continuous-time dynamical systems.

The connections between Lyapunov functions for some optimization algorithms and differential equations

- Mathematics, Computer Science, SIAM J. Numer. Anal.
- 2021

It is shown that the majority of typical discretizations of this ODE, such as the heavy-ball method, do not possess suitable discrete Lyapunov functions, and hence fail to reproduce the desired limiting behaviour of this ODE, which implies that their convergence rates when seen as optimization methods cannot behave in an "accelerated" manner.

Potential-Function Proofs for Gradient Methods

- Computer Science, Mathematics, Theory Comput.
- 2019

These amortized-analysis proofs of convergence for gradient methods are based on simple potential-function arguments, and their structure and presentation will be useful as a guiding principle in learning and using such proofs.

Generalized Momentum-Based Methods: A Hamiltonian Perspective

- Computer Science, SIAM J. Optim.
- 2021

We take a Hamiltonian-based perspective to generalize Nesterov's accelerated gradient descent and Polyak's heavy ball method to a broad class of momentum methods in the setting of (possibly)…

Control System Analysis and Design Via the “Second Method” of Lyapunov: I—Continuous-Time Systems

- Mathematics
- 1960

Anytime Tail Averaging

- Computer Science, ArXiv
- 2019

This work proposes two techniques with a low constant memory cost that perform tail averaging with access to the average at every time step and shows how one can improve the accuracy of that average at the cost of increased memory consumption.

Acceleration via Symplectic Discretization of High-Resolution Differential Equations

- Computer Science, NeurIPS
- 2019

It is shown that the optimization algorithm generated by applying the symplectic scheme to a high-resolution ODE proposed by Shi et al.