Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent in Neural Networks

@inproceedings{Morse2016SimpleEO,
  title={Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent in Neural Networks},
  author={Gregory Morse and Kenneth O. Stanley},
  booktitle={GECCO},
  year={2016}
}
While evolutionary algorithms (EAs) have long offered an alternative approach to optimization, in recent years backpropagation through stochastic gradient descent (SGD) has come to dominate the fields of neural network optimization and deep learning. One hypothesis for the absence of EAs in deep learning is that modern neural networks have become so high dimensional that evolution, with its inexact gradient, cannot match the exact gradient calculations of backpropagation. Furthermore, the…
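
As an illustration of what "simple evolutionary optimization" of network weights can look like, the sketch below uses NumPy to run a small (mu + lambda) evolution strategy on a tiny XOR network. The architecture, hyperparameters, fitness function, and task are assumptions made for this page only; they are not the algorithm or experimental setup used in the paper.

# Minimal sketch (assumed for illustration, not the paper's algorithm):
# a (mu + lambda) evolution strategy that treats a small network's weights
# as one flat vector, perturbs it with Gaussian noise, and keeps the best
# candidates by fitness. Toy task: XOR with mean-squared-error fitness.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tiny 2-8-1 network; all parameters live in a single flat vector.
sizes = [(2, 8), (8,), (8, 1), (1,)]
n_params = sum(int(np.prod(s)) for s in sizes)

def unpack(theta):
    """Split the flat parameter vector into weight and bias arrays."""
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def forward(theta, inputs):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(inputs @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(theta):
    return float(np.mean((forward(theta, X) - y) ** 2))

# (mu + lambda) selection: mutate each parent, keep the fittest overall.
mu, lam, sigma, generations = 5, 20, 0.1, 2000
population = [rng.normal(0.0, 0.5, n_params) for _ in range(mu)]

for gen in range(generations):
    offspring = [parent + rng.normal(0.0, sigma, n_params)
                 for parent in population
                 for _ in range(lam // mu)]
    pool = population + offspring
    pool.sort(key=loss)          # no gradient anywhere: selection only
    population = pool[:mu]

print("best loss:", round(loss(population[0]), 4))
print("predictions:", forward(population[0], X).ravel().round(3))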

Citations

This paper has 26 citations (18 extracted).

References

Publications referenced by this paper.

  • T. Tieleman and G. Hinton. Lecture 6.5-RMSprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012. (Highly influential; 15 excerpts.)
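
For context, the RMSprop rule named in this reference (dividing the gradient by a running average of its recent magnitude) is commonly written as the following pair of updates, where \rho is the decay rate, \eta the learning rate, and \epsilon a small stability constant; this is the standard textbook form rather than a quotation from the lecture slides, and the exact placement of \epsilon varies between implementations.

  E[g^2]_t     = \rho \, E[g^2]_{t-1} + (1 - \rho)\, g_t^2
  \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t} + \epsilon}\, g_t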
