Path-SGD: Path-Normalized Optimization in Deep Neural Networks

@inproceedings{Neyshabur2015PathSGDPO,
  title={Path-SGD: Path-Normalized Optimization in Deep Neural Networks},
  author={Behnam Neyshabur and Ruslan Salakhutdinov and Nathan Srebro},
  booktitle={NIPS},
  year={2015}
}
We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is easy and efficient to implement and leads to empirical gains over SGD and AdaGrad. 
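To make the abstract's "path-wise regularizer" concrete: for the l2 case, the regularizer sums, over every input-to-output path in the network, the product of squared weights along that path, and the Path-SGD update rescales each weight's gradient by a per-edge factor derived from this sum. Below is a minimal NumPy sketch of that idea for a bias-free fully connected network, assuming p = 2; the per-edge factor is computed via one forward and one backward pass over the squared weight matrices with all-ones inputs. The function names path_sgd_scales and path_sgd_step are ours for illustration, not from the paper, and this is a simplified sketch rather than the authors' implementation.

```python
import numpy as np

def path_sgd_scales(weights):
    """Per-edge scaling factors for the l2 path regularizer (p = 2).

    weights: list of (fan_in x fan_out) matrices for a feed-forward net.
    The factor for an edge (i, j) in layer l is
      (sum over paths input -> unit i of the product of squared weights)
      * (sum over paths unit j -> output of the product of squared weights),
    computed with one forward and one backward pass over the squared weights.
    """
    sq = [W ** 2 for W in weights]
    # Forward pass on all-ones input: fwd[l][i] accumulates the
    # squared-weight path products from the inputs up to unit i of layer l.
    fwd = [np.ones(sq[0].shape[0])]
    for S in sq:
        fwd.append(fwd[-1] @ S)
    # Backward pass from all-ones output: bwd[l][j] accumulates the
    # squared-weight path products from unit j of layer l to the outputs.
    bwd = [np.ones(sq[-1].shape[1])]
    for S in reversed(sq):
        bwd.append(S @ bwd[-1])
    bwd.reverse()
    # Scaling matrix for layer l: outer product of incoming and
    # outgoing path sums, same shape as weights[l].
    return [np.outer(fwd[l], bwd[l + 1]) for l in range(len(weights))]

def path_sgd_step(weights, grads, lr=0.01, eps=1e-12):
    """One approximate steepest-descent step in the path-norm geometry:
    each gradient entry is divided by its per-edge scaling factor."""
    scales = path_sgd_scales(weights)
    return [W - lr * g / (s + eps) for W, g, s in zip(weights, grads, scales)]

# Toy usage: a two-layer net with random gradients standing in for
# backprop output on a mini-batch.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
grads = [rng.standard_normal(W.shape) for W in weights]
weights = path_sgd_step(weights, grads)
```

Because the scaling factors depend only on products of weights along entire paths, the resulting update is invariant to the node-wise rescalings that leave a ReLU network's output unchanged, which is the geometric point the abstract makes against plain SGD.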