Corpus ID: 11856692

On the saddle point problem for non-convex optimization

@article{pascanu2014saddle,
  title={On the saddle point problem for non-convex optimization},
  author={Razvan Pascanu and Yann Dauphin and Surya Ganguli and Yoshua Bengio},
  journal={arXiv preprint},
  year={2014}
}
  • Published 2014
  • Computer Science
  • ArXiv
  • A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high-dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for the ability of these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from…
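The difficulty the abstract alludes to can be illustrated on a toy function (this example is not from the paper): f(x, y) = x² − y² has a strict saddle at the origin, with positive curvature along x and negative curvature along y. Plain gradient descent initialized exactly on the attracting manifold (y = 0) converges to the saddle, and a point only infinitesimally off that manifold escapes very slowly at first:

```python
import numpy as np

# f(x, y) = x^2 - y^2: a strict saddle at the origin
# (minimum along x, maximum along y).
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def gradient_descent(p0, lr=0.1, steps=200):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = p - lr * grad(p)
    return p

# Initialized with y = 0 exactly, every iterate keeps y = 0,
# so gradient descent converges to the saddle itself:
on_manifold = gradient_descent([1.0, 0.0])

# A tiny perturbation in the negative-curvature direction grows
# geometrically (factor 1 + 2*lr per step) and eventually escapes,
# but progress is slow for as long as |y| remains tiny:
perturbed = gradient_descent([1.0, 1e-8])

print(on_manifold)  # x has decayed toward 0; y is still exactly 0
print(perturbed)    # y has grown far away from the saddle
```

This is the qualitative picture behind the argument: near a saddle the gradient is small in every direction, so first-order methods can spend many iterations in its vicinity even though it is not a minimum.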
    73 Citations
    • A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points (21 citations)
    • Efficiently escaping saddle points on manifolds (23 citations)
    • Run-and-Inspect Method for nonconvex optimization and global optimality bounds for R-local minimizers (1 citation)
    • Extending the step-size restriction for gradient descent to avoid strict saddle points
    • Convexification and Deconvexification for Training Neural Networks
    • Numerically Recovering the Critical Points of a Deep Linear Autoencoder (5 citations)
    • The Saddle Point Problem of Polynomials (3 citations)
    • Are saddles good enough for neural networks (1 citation)


    References
    • An analysis on negative curvature induced by singularity in multi-layer neural-network learning (13 citations; highly influential)
    • Newton-Type Methods (24 citations)
    • Exact solutions to the nonlinear dynamics of learning in deep linear neural networks (1,016 citations)
    • Replica Symmetry Breaking Condition Exposed by Random Matrix Calculation of Landscape Complexity (78 citations)
    • Neural networks and principal component analysis: Learning from examples without local minima (1,184 citations; highly influential)
    • Natural gradient descent for on-line learning (77 citations)
    • On-Line Learning Theory of Soft Committee Machines with Correlated Hidden Units: Steepest Gradient Descent and Natural Gradient Descent (33 citations; highly influential)
    • Random Search for Hyper-Parameter Optimization (4,059 citations)
    • Topmoumoute Online Natural Gradient Algorithm (161 citations)
    • Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning) (2,008 citations)