Corpus ID: 235795303

Better SGD using Second-order Momentum

@inproceedings{Tran2021BetterSU,
  title={Better SGD using Second-order Momentum},
  author={Hoang Tran and Ashok Cutkosky},
  year={2021}
}
We develop a new algorithm for non-convex stochastic optimization that finds an ε-critical point using the optimal O(ε⁻³) stochastic gradient and Hessian-vector product computations. Our algorithm uses Hessian-vector products to “correct” a bias term in the momentum of SGD with momentum. This leads to better gradient estimates in a manner analogous to variance reduction methods. In contrast to prior work, we do not require excessively large batch sizes, and are able to provide an adaptive algorithm… 
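
The update described in the abstract can be pictured as a momentum estimate that is transported to the new iterate with a Hessian-vector product before being mixed with a fresh stochastic gradient. The sketch below is a minimal illustration of that pattern in JAX, not the paper's exact algorithm: the toy objective, the hyperparameters, and the precise form of the correction are assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact algorithm): momentum SGD
# where the previous momentum is "corrected" along the last displacement with
# a Hessian-vector product before mixing in the new stochastic gradient.
import jax
import jax.numpy as jnp

def loss(w, batch):
    x, y = batch
    return jnp.mean((x @ w - y) ** 2)  # toy least-squares objective (assumed)

grad_fn = jax.grad(loss)

def hvp(w, v, batch):
    # Hessian-vector product H(w) @ v via forward-over-reverse autodiff.
    return jax.jvp(lambda u: grad_fn(u, batch), (w,), (v,))[1]

def step(w_prev, w, m, batch, lr=0.01, beta=0.9):
    g = grad_fn(w, batch)
    # Transport the old momentum to the current iterate using a Hessian-vector
    # product along the displacement w - w_prev, then average in the fresh
    # gradient; this mirrors the bias correction described in the abstract.
    m = beta * (m + hvp(w, w - w_prev, batch)) + (1.0 - beta) * g
    return w, w - lr * m, m

# Usage on synthetic data (assumed, for illustration only).
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 5))
y = x @ jnp.arange(5.0)
w_prev = w = jnp.zeros(5)
m = grad_fn(w, (x, y))  # initialize the momentum with a plain gradient
for _ in range(200):
    w_prev, w, m = step(w_prev, w, m, (x, y))
```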

Adaptive Momentum-Based Policy Gradient with Second-Order Information

TLDR
This work proposes a variance-reduced policy-gradient method, called SHARP, which incorporates second-order information into stochastic gradient descent (SGD) using momentum with a time-varying learning rate.
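
As a rough illustration of the "momentum with a time-varying learning rate" ingredient mentioned in this summary (the decay exponent, the update form, and the names below are assumptions for illustration, not SHARP itself):

```python
def decayed_lr(t, lr0=0.05, power=1.0 / 3.0):
    # Time-varying learning rate lr0 / (t + 1)^power; the exponent is an
    # assumed placeholder, not the schedule from the SHARP paper.
    return lr0 / (t + 1) ** power

def momentum_ascent_step(theta, m, grad_estimate, t, beta=0.9):
    # Generic momentum update on policy parameters `theta`, where
    # `grad_estimate` stands in for a (variance-reduced) policy-gradient sample.
    m = beta * m + (1.0 - beta) * grad_estimate
    return theta + decayed_lr(t) * m, m  # gradient ascent on expected return
```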

References

Adaptive Bound Optimization for Online Convex Optimization

TLDR
This work introduces a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far, and proves competitive guarantees showing the algorithm achieves a bound within a constant factor of the best possible bound in hindsight.
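
For context, here is a minimal sketch of the kind of adaptive, per-coordinate regularization this summary describes, in the AdaGrad style; the exact update below is an illustrative assumption rather than the referenced paper's precise algorithm.

```python
import jax.numpy as jnp

def adaptive_step(w, g, accum, lr=0.1, eps=1e-8):
    # Accumulate squared gradients seen so far; coordinates with large
    # historical gradients get smaller effective steps, i.e. the
    # regularization adapts to the losses observed so far.
    accum = accum + g ** 2
    return w - lr * g / (jnp.sqrt(accum) + eps), accum
```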