A Smoothing Stochastic Gradient Method for Composite Optimization

@article{Lin2014ASS,
  title={A Smoothing Stochastic Gradient Method for Composite Optimization},
  author={Qihang Lin},
  journal={Optimization Methods and Software},
  year={2014},
  volume={29},
  pages={1281-1301}
}
We consider the unconstrained optimization problem whose objective function is composed of a smooth and a non-smooth component, where the smooth component is the expectation of a random function. This type of problem arises in some interesting applications in machine learning. We propose a stochastic gradient descent algorithm for this class of optimization problems. When the non-smooth component has a particular structure, we propose another stochastic gradient descent algorithm by incorporating a …
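
The setting described above is the composite problem min_x E_xi[f(x; xi)] + g(x), with f(.; xi) smooth and g non-smooth. The sketch below is only an illustrative stochastic proximal gradient loop under assumptions not taken from the paper: the non-smooth component is assumed to be an l1 penalty g(x) = lambda * ||x||_1 (whose proximal map is soft-thresholding), gradients are estimated from a single sample, the step size is a generic 1/sqrt(t) rule, and the iterates are averaged. The names soft_threshold, stochastic_proximal_gradient, and grad_sample, as well as the synthetic data, are hypothetical and do not reproduce the paper's algorithms or analysis.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding); an assumed choice of g.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_proximal_gradient(grad_sample, x0, lam, step, n_iters, rng):
    # Minimize E[f(x; xi)] + lam * ||x||_1 using noisy gradients of the smooth part
    # and a proximal step for the l1 term. Illustrative sketch, not the paper's method.
    x = x0.copy()
    x_avg = np.zeros_like(x0)
    for t in range(1, n_iters + 1):
        g = grad_sample(x, rng)              # unbiased stochastic gradient of the smooth part
        eta = step / np.sqrt(t)              # a common diminishing step size (assumed)
        x = soft_threshold(x - eta * g, eta * lam)
        x_avg += (x - x_avg) / t             # running average of the iterates
    return x_avg

# Usage on a toy l1-regularized least-squares instance (synthetic, illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(1000)

def grad_sample(x, rng):
    i = rng.integers(len(b))                 # draw one data point
    return (A[i] @ x - b[i]) * A[i]          # gradient of 0.5 * (a_i^T x - b_i)^2

x_hat = stochastic_proximal_gradient(grad_sample, np.zeros(20), lam=0.01,
                                     step=0.5, n_iters=20000, rng=rng)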
