Convergence of the Iterates in Mirror Descent Methods

@article{Doan2019ConvergenceOT,
  title={Convergence of the Iterates in Mirror Descent Methods},
  author={Thinh T. Doan and Subhonmesh Bose and Dinh Hoa Nguyen and Carolyn L. Beck},
  journal={IEEE Control Systems Letters},
  year={2019},
  volume={3},
  pages={114--119}
}
We consider centralized and distributed mirror descent (MD) algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is…
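
To make the step-size regime concrete, below is a minimal Python sketch of centralized mirror descent with square-summable but not summable step sizes (e.g. alpha_k = a/(k+1)). The entropic mirror map, the toy objective, and all function names are illustrative assumptions for this sketch, not taken from the paper.

# A minimal sketch of centralized mirror descent on the probability simplex.
# The objective f and its subgradient are illustrative choices, not from the paper.
import numpy as np

def entropic_mirror_descent(subgrad, x0, num_iters=5000, a=1.0):
    """Mirror descent with the negative-entropy mirror map.

    With this mirror map the update is the multiplicative-weights rule
        x_{k+1} proportional to x_k * exp(-alpha_k * g_k),  g_k in the subdifferential of f at x_k,
    which keeps every iterate on the simplex.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        alpha = a / (k + 1)          # square summable, not summable
        g = subgrad(x)               # any subgradient of f at x
        w = x * np.exp(-alpha * g)
        x = w / w.sum()              # Bregman projection back onto the simplex
    return x

# Example: minimize the nonsmooth f(x) = max_i x_i over the simplex;
# the unique minimizer is the uniform distribution.
def subgrad_max(x):
    e = np.zeros_like(x)
    e[np.argmax(x)] = 1.0            # a subgradient of max_i x_i
    return e

x_star = entropic_mirror_descent(subgrad_max, x0=np.array([0.7, 0.2, 0.1]))
print(x_star)                        # approaches [1/3, 1/3, 1/3]

In this regime the function values already converge by standard arguments; the point of the paper is that the iterate x_k itself settles at an optimizer, which is what the final print statement illustrates on this toy problem.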
