# Riemannian stochastic approximation algorithms

@inproceedings{Karimi2022RiemannianSA, title={Riemannian stochastic approximation algorithms}, author={Mohammad Reza Karimi and Ya-Ping Hsieh and P. Mertikopoulos and Andreas Krause}, year={2022} }

We examine a wide class of stochastic approximation algorithms for solving (stochastic) nonlinear problems on Riemannian manifolds. Such algorithms arise naturally in the study of Riemannian optimization, game theory and optimal transport, but their behavior is much less understood compared to the Euclidean case because of the lack of a global linear structure on the manifold. We overcome this difficulty by introducing a suitable Fermi coordinate frame which allows us to map the asymptotic…
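The basic iteration behind such schemes can be made concrete. Below is a minimal sketch — our own toy example, not the paper's algorithm — of a Riemannian stochastic approximation step $x_{k+1} = \mathrm{Retr}_{x_k}(\gamma_k V(x_k;\xi_k))$ on the unit sphere, applied to stochastic leading-eigenvector estimation. The covariance, step-size schedule, and retraction are illustrative choices:

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a Riemannian stochastic
# approximation step on the unit sphere S^{n-1}, applied to stochastic
# leading-eigenvector estimation (an Oja-style update).
rng = np.random.default_rng(0)

n = 5
# Diagonal ground-truth covariance with a dominant direction along e_0.
A = np.diag([5.0, 1.0, 0.8, 0.5, 0.2])

def tangent_project(x, v):
    """Project v onto the tangent space of the sphere at x."""
    return v - np.dot(x, v) * x

def retract(x, v):
    """Retraction on the sphere: step in the tangent space, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

x = rng.standard_normal(n)
x /= np.linalg.norm(x)

for k in range(1, 20001):
    xi = rng.standard_normal(n) * np.sqrt(np.diag(A))  # sample with E[xi xi^T] = A
    grad = tangent_project(x, np.outer(xi, xi) @ x)    # noisy Riemannian gradient of <x, A x>
    x = retract(x, (1.0 / k**0.7) * grad)              # ascent step with vanishing step size

# x aligns with the top eigenvector e_0 (up to sign).
print(abs(x[0]))
```

The vanishing step sizes play the role of the Robbins–Monro schedule; the retraction stands in for the exponential map, which is a common and cheaper substitute in practice.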

## References

Showing 1–10 of 64 references

### Convergence Analysis of Riemannian Stochastic Approximation Schemes

- Mathematics, Computer Science · arXiv
- 2020

This paper analyzes the convergence of a large class of Riemannian stochastic approximation (SA) schemes aimed at tackling stochastic optimization problems, and also studies biased SA schemes.

### Riemannian proximal gradient methods

- Mathematics · Mathematical Programming
- 2021

In the Euclidean setting, the proximal gradient method and its accelerated variants are a class of efficient algorithms for optimization problems with a decomposable objective. In this paper, we develop…

### Averaging Stochastic Gradient Descent on Riemannian Manifolds

- Computer Science, Mathematics · COLT
- 2018

A geometric framework is developed to transform a sequence of slowly converging iterates generated by stochastic gradient descent on a Riemannian manifold into an averaged iterate sequence with a robust and fast $O(1/n)$ convergence rate.
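The tangent-space averaging idea in this summary can be sketched on the unit sphere, where the exponential and logarithm maps have closed forms. Everything below — the base point, the noise model, and averaging in a single tangent space rather than computing a full Karcher mean — is an illustrative simplification, not the paper's construction:

```python
import numpy as np

# Hedged sketch of geometric iterate averaging on the unit sphere:
# map iterates into a common tangent space with the log map, average
# there, and map the result back with the exponential map.

def exp_map(x, v):
    """Exponential map on the sphere: follow the geodesic from x along v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def log_map(x, y):
    """Logarithm map on the sphere: tangent vector at x pointing toward y."""
    c = np.clip(np.dot(x, y), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    u = y - c * x
    return theta * u / np.linalg.norm(u)

def tangent_average(iterates, base):
    """Average iterates in the tangent space at `base`, then map back."""
    v_bar = np.mean([log_map(base, y) for y in iterates], axis=0)
    return exp_map(base, v_bar)

# Noisy iterates clustered around the north pole, standing in for the
# slowly converging SGD sequence.
rng = np.random.default_rng(1)
pole = np.array([0.0, 0.0, 1.0])
iterates = []
for _ in range(200):
    v = 0.3 * rng.standard_normal(3)
    v -= np.dot(pole, v) * pole          # keep the perturbation tangent at the pole
    iterates.append(exp_map(pole, v))

avg = tangent_average(iterates, iterates[0])
print(np.dot(avg, pole))                 # the averaged point stays near the pole
```

Averaging in one fixed tangent space is only a first-order surrogate for the intrinsic (Fréchet) mean, but it captures the variance-reduction effect that the paper exploits.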

### On Riemannian Stochastic Approximation Schemes with Fixed Step-Size

- Mathematics · AISTATS
- 2021

This paper studies fixed step-size stochastic approximation (SA) schemes, including stochastic gradient schemes, in a Riemannian framework; the asymptotic rate of convergence is established through an asymptotic expansion of the bias and a central limit theorem.

### Escaping from saddle points on Riemannian manifolds

- Mathematics, Computer Science · NeurIPS
- 2019

This work shows that a perturbed version of the Riemannian gradient descent algorithm converges to a second-order stationary point (and is hence able to escape saddle points on the manifold), and is the first to prove such a rate for nonconvex, manifold-constrained problems.

### An accelerated first-order method for non-convex optimization on manifolds

- Mathematics, Computer Science · arXiv
- 2020

The first gradient methods on Riemannian manifolds to achieve accelerated rates in the non-convex case are described, and precise claims to that effect are proved with explicit constants, which affect the worst-case complexity bounds of the optimization algorithms.

### Stochastic Gradient Descent on Riemannian Manifolds

- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2013

This paper develops a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold, and proves that, as in the Euclidean case, the gradient descent algorithm converges to a critical point of the cost function.
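The procedure this entry describes — a stochastic gradient step taken along a geodesic via the exponential map — can be sketched on the sphere. The objective, noise model, and step-size schedule below are our own toy choices, not the paper's:

```python
import numpy as np

# Hedged sketch of Riemannian SGD with an exact exponential map on the
# unit sphere: minimize f(x) = E[-<x, a + noise>]; the minimizer is a/|a|.

def exp_map(x, v):
    """Exponential map on the sphere: follow the geodesic from x along v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

rng = np.random.default_rng(2)
a = np.array([3.0, 4.0, 0.0])            # target direction
x = np.array([0.0, 0.0, 1.0])            # start at the north pole

for k in range(1, 5001):
    sample = a + rng.standard_normal(3)              # noisy observation of a
    egrad = -sample                                  # Euclidean gradient of -<x, sample>
    rgrad = egrad - np.dot(x, egrad) * x             # project onto the tangent space at x
    x = exp_map(x, -(1.0 / (k + 10)) * rgrad)        # geodesic step, Robbins-Monro step size

print(np.dot(x, a / np.linalg.norm(a)))              # alignment with the target direction
```

The only manifold-specific ingredients are the tangent-space projection of the Euclidean gradient and the geodesic step, which is exactly the structure the paper's convergence result covers.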

### Efficiently escaping saddle points on manifolds

- Computer Science · NeurIPS
- 2019

Smooth, non-convex optimization problems on Riemannian manifolds occur in machine learning as a result of orthonormality, rank or positivity constraints. First- and second-order necessary optimality…

### No-regret Online Learning over Riemannian Manifolds

- Mathematics, Computer Science · NeurIPS
- 2021

A universal lower bound on the achievable regret is derived by constructing an online convex optimization problem on Hadamard manifolds, and upper bounds on the regret are established in terms of the time horizon, manifold curvature, and manifold dimension.

### Curvature-Dependant Global Convergence Rates for Optimization on Manifolds of Bounded Geometry

- Mathematics, Computer Science
- 2020

We give curvature-dependant convergence rates for the optimization of weakly convex functions defined on a manifold of 1-bounded geometry via Riemannian gradient descent and via the dynamic…