Efficient Stochastic Gradient Descent for Strongly Convex Optimization

@article{Yang2013EfficientSG,
  title={Efficient Stochastic Gradient Descent for Strongly Convex Optimization},
  author={Tianbao Yang and Lijun Zhang},
  journal={CoRR},
  year={2013},
  volume={abs/1304.5504}
}
We motivate this study by a recent work on a stochastic gradient descent (SGD) method with only one projection (Mahdavi et al., 2012), which aims to alleviate the computational bottleneck of the standard SGD method, namely the projection performed at each iteration, and enjoys an O(log T / T) convergence rate for strongly convex optimization. In this paper…
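To make the contrast concrete, below is a minimal sketch of the "one projection" idea: run the strongly convex step-size schedule without projecting the intermediate iterates, and project only the final (averaged) point back onto the domain. This is an illustration only, not the authors' algorithm; it assumes a Euclidean ball domain of a given radius, and it omits the penalized update that Mahdavi et al. (2012) use to keep the unprojected iterates close to a general convex domain. The names `sgd_one_projection`, `grad`, `radius`, and `mu` are choices made for this sketch.

```python
import numpy as np

def sgd_one_projection(grad, x0, radius, mu, T):
    """Sketch: SGD for a mu-strongly convex objective, deferring the
    domain projection to a single step at the end.

    grad   -- function x -> stochastic (sub)gradient at x
    x0     -- starting point inside the ball of the given radius
    radius -- radius of the Euclidean ball domain (assumed for illustration)
    mu     -- strong-convexity modulus, used in the O(1/(mu*t)) step size
    T      -- number of iterations
    """
    x = np.array(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, T + 1):
        eta = 1.0 / (mu * t)       # standard step size under strong convexity
        x = x - eta * grad(x)      # unconstrained update: no projection here
        avg += (x - avg) / t       # running average of the iterates
    # the single projection: clip the averaged point back onto the ball
    norm = np.linalg.norm(avg)
    if norm > radius:
        avg *= radius / norm
    return avg

# Example use: a noisy quadratic, f(x) = 0.5 * mu * ||x||^2 + noise
rng = np.random.default_rng(0)
mu = 1.0
noisy_grad = lambda x: mu * x + 0.1 * rng.standard_normal(x.shape)
x_hat = sgd_one_projection(noisy_grad, x0=np.ones(5), radius=2.0, mu=mu, T=10_000)
```

The computational point of the abstract is visible in the loop: each of the T iterations costs only a gradient step, while the projection (which can be expensive for complicated domains) is paid once rather than T times.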