Corpus ID: 235795303

Better SGD using Second-order Momentum

@inproceedings{Tran2021BetterSU,
  title={Better SGD using Second-order Momentum},
  author={Hoang Tran and Ashok Cutkosky},
  year={2021}
}
We develop a new algorithm for non-convex stochastic optimization that finds an ε-critical point in the optimal O(ε⁻³) stochastic gradient and Hessian-vector product computations. Our algorithm uses Hessian-vector products to “correct” a bias term in the momentum of SGD with momentum. This leads to better gradient estimates in a manner analogous to variance reduction methods. In contrast to prior work, we do not require excessively large batch sizes, and are able to provide an adaptive algorithm…
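
The abstract describes the key mechanism: a Hessian-vector product corrects the bias that momentum accumulates as the iterate moves. Below is a minimal, illustrative sketch of that idea on a toy objective using PyTorch autograd; the update form m_t = β(m_{t-1} + H_t(x_t − x_{t−1})) + (1 − β)g_t, the toy loss, and all hyperparameters are assumptions for illustration, not the authors' reference implementation.

import torch

def loss_fn(x, noise):
    # Toy non-convex stochastic objective; `noise` stands in for a minibatch sample.
    return torch.sum((x ** 2 - 1.0) ** 2) + torch.dot(noise, x)

def stochastic_grad(x, noise, create_graph=False):
    loss = loss_fn(x, noise)
    (g,) = torch.autograd.grad(loss, x, create_graph=create_graph)
    return g

def hvp(x, noise, v):
    # Hessian-vector product via double backward: d/dx <grad_x f(x), v>.
    g = stochastic_grad(x, noise, create_graph=True)
    (hv,) = torch.autograd.grad(torch.dot(g, v), x)
    return hv

def sgd_second_order_momentum(dim=10, steps=200, lr=0.05, beta=0.9, seed=0):
    torch.manual_seed(seed)
    x = torch.randn(dim, requires_grad=True)
    x_prev = x.detach().clone()
    m = torch.zeros(dim)
    for _ in range(steps):
        noise = 0.1 * torch.randn(dim)              # fresh stochastic sample
        g = stochastic_grad(x, noise).detach()      # stochastic gradient at x_t
        disp = x.detach() - x_prev                  # x_t - x_{t-1}
        correction = hvp(x, noise, disp).detach()   # H_t (x_t - x_{t-1})
        # "Transport" the old momentum to the current iterate with the
        # Hessian-vector product before mixing in the fresh gradient.
        m = beta * (m + correction) + (1.0 - beta) * g
        x_prev = x.detach().clone()
        with torch.no_grad():
            x -= lr * m
    return x.detach()

if __name__ == "__main__":
    print(sgd_second_order_momentum())

The correction term is what distinguishes this from plain SGD with momentum: without it, the old momentum is a biased estimate of the gradient at the new iterate.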

Citations of this paper

Adaptive Momentum-Based Policy Gradient with Second-Order Information

This work proposes a variance-reduced policy-gradient method, called SHARP, which incorporates second-order information into stochastic gradient descent (SGD) using momentum with a time-varying learning rate.

Momentum Aggregation for Private Non-convex ERM

An improved sensitivity analysis of stochastic gradient descent on smooth objectives that exploits the recurrence of examples across epochs, yielding a differentially private algorithm that improves on the previous best gradient bound.

References

Adaptive Bound Optimization for Online Convex Optimization

This work introduces a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far, and proves competitive guarantees showing that the algorithm's bound is within a constant factor of the best possible bound in hindsight.
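
This reference's idea of adapting the regularizer to the observed losses is most familiar today as per-coordinate adaptive step sizes. The following is a minimal sketch in that spirit; the specific update, function names, and constants are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def adaptive_online_gd(grad_stream, dim, eta=1.0, eps=1e-8):
    """Online (sub)gradient descent with per-coordinate adaptive step sizes."""
    x = np.zeros(dim)
    accum = np.zeros(dim)                        # running sum of squared gradients
    for g in grad_stream:                        # one (sub)gradient per round
        accum += g ** 2
        x -= eta * g / (np.sqrt(accum) + eps)    # larger accumulated gradient -> smaller step
        yield x.copy()

# Example: linear losses f_t(x) = <g_t, x> with random gradients.
rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(100)]
iterates = list(adaptive_online_gd(grads, dim=5))
print(iterates[-1])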