Corpus ID: 119153196

Tail bounds for stochastic approximation

@article{Friedlander2013TailBF,
  title={Tail bounds for stochastic approximation},
  author={Michael P. Friedlander and Gabriel Goh},
  journal={arXiv: Optimization and Control},
  year={2013}
}
Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream, or where it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each iteration, the expected rate of convergence matches that of the underlying deterministic gradient…
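The variance-decrease idea described in the abstract can be sketched as follows. This is an illustrative example only, not the paper's algorithm: the least-squares test problem, the step size, and the 1.5x batch-growth schedule are all assumptions. Growing the mini-batch each iteration shrinks the variance of the gradient error, so the iterates behave increasingly like those of the deterministic gradient method.

```python
import numpy as np

# Minimize f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 with a mini-batch
# gradient method whose batch size grows geometrically, so the variance
# of the stochastic gradient error decreases across iterations.
# All problem data and schedule choices below are illustrative assumptions.

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star  # consistent system, so x_star minimizes the objective

def batch_grad(x, idx):
    """Average gradient of the sampled component functions."""
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

x = np.zeros(d)
step, batch = 0.1, 2
for _ in range(40):
    idx = rng.choice(n, size=min(batch, n), replace=False)
    x -= step * batch_grad(x, idx)
    batch = int(1.5 * batch) + 1  # geometric growth -> shrinking variance

err = np.linalg.norm(x - x_star)
```

Once the batch reaches the full data set, each step is an exact gradient step, so the late iterations contract at the deterministic rate; the early, cheap, noisy iterations do most of the initial progress.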


    Citations (6)

    A Proximal Stochastic Gradient Method with Progressive Variance Reduction


    Extragradient Method with Variance Reduction for Stochastic Variational Inequalities


    Parallelizing sparse recovery algorithms: A stochastic approach

    • A. Shah, A. Majumdar
    • Computer Science
    • 2014 19th International Conference on Digital Signal Processing
    • 2014

    Accelerating low-rank matrix completion on GPUs

    • A. Shah, A. Majumdar
    • Computer Science
    • 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI)
    • 2014

    References (showing 10 of 27)

    Approximation accuracy, gradient methods, and error bound for structured convex optimization

    • Paul Tseng
    • Computer Science, Mathematics
    • Math. Program.
    • 2010

    Hybrid Deterministic-Stochastic Methods for Data Fitting


    A Stochastic Approximation Method


    Incremental Gradient Algorithms with Stepsizes Bounded Away from Zero

    • M. Solodov
    • Mathematics, Computer Science
    • Comput. Optim. Appl.
    • 1998

    Error bounds and convergence analysis of feasible descent methods: a general approach
