
Gradient descent

Known as: Descent, Gradient descent optimization, Gradient descent method 
Gradient descent is a first-order iterative optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point.
Source: Wikipedia
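
The update rule described above can be written in a few lines of code. A minimal sketch, where the quadratic objective f(x) = ‖x‖², the step size, and the iteration count are illustrative choices:

```python
import numpy as np

# Minimal gradient descent: repeatedly step in the direction of the
# negative gradient. Objective f(x) = ||x||^2, so grad f(x) = 2x.
def gradient_descent(grad, x0, step=0.1, iters=100):
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)  # step proportional to the negative gradient
    return x

x_min = gradient_descent(grad=lambda x: 2 * x, x0=np.array([3.0, -2.0]))
print(x_min)  # approaches the minimizer at the origin
```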

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2018
We provide a detailed study on the implicit bias of gradient descent when optimizing loss functions with strictly monotone tails… 
Highly Cited
2017
Deep learning models are often successfully trained using gradient descent, despite the worst-case hardness of the underlying non… 
Highly Cited
2016
We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by… 
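
This almost-sure escape from strict saddle points under random initialization can be illustrated on a toy problem. A minimal sketch, assuming the objective f(x, y) = x⁴/4 − x²/2 + y²/2 (a strict saddle at the origin, minimizers at (±1, 0)); the function, step size, and initialization scale are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy objective with a strict saddle at the origin and minimizers at (+-1, 0).
def grad(p):
    x, y = p
    return np.array([x**3 - x, y])  # gradient of f(x, y) = x^4/4 - x^2/2 + y^2/2

rng = np.random.default_rng(0)
p = rng.normal(scale=0.1, size=2)  # random initialization near the saddle
for _ in range(500):
    p = p - 0.1 * grad(p)           # plain gradient descent step

print(p)  # ends near (+1, 0) or (-1, 0), not at the saddle (0, 0)
```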
Highly Cited
2015
Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix… 
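
A common first-order approach to such rank-constrained problems is projected gradient descent, where each gradient step is followed by projection onto the rank-r set via a truncated SVD. A minimal sketch, assuming a simple least-squares objective and illustrative problem sizes rather than the paper's formulation:

```python
import numpy as np

def project_rank(M, r):
    """Project onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Illustrative setup: recover a rank-2 matrix from noisy observations Y.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))
Y = X_true + 0.01 * rng.normal(size=X_true.shape)

X = np.zeros_like(Y)
for _ in range(100):
    # Gradient step on ||X - Y||^2 / 2, then project back to rank 2.
    X = project_rank(X - 0.5 * (X - Y), r=2)
```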
Highly Cited
2014
First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a… 
Highly Cited
2011
Designing distributed algorithms that converge quickly to an equilibrium is one of the foremost research goals in algorithmic… 
2010
We introduce an algorithm for unconstrained optimization based on a transformation of the Newton method with line search… 
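
For context, the baseline this abstract builds on is the classical Newton method damped by a backtracking (Armijo) line search. A generic sketch of that baseline, not the paper's transformed variant; the Rosenbrock test function and the constants are illustrative choices:

```python
import numpy as np

def newton_line_search(f, grad, hess, x, iters=50, beta=0.5, c=1e-4):
    """Newton direction with backtracking line search (Armijo condition)."""
    for _ in range(iters):
        g = grad(x)
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        t = 1.0
        while t > 1e-10 and f(x + t * d) > f(x) + c * t * (g @ d):
            t *= beta                      # backtrack until sufficient decrease
        x = x + t * d
    return x

# Example: minimize the Rosenbrock function (illustrative choice).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                           [-400*x[0], 200.0]])
print(newton_line_search(f, grad, hess, np.array([-1.0, 1.0])))  # near (1, 1)
```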
Highly Cited
2009
We present an algorithm for finding an s-sparse vector x that minimizes the square-error ‖y − Φx… 
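
One standard way to attack this sparse recovery problem with gradient descent is to follow each gradient step on ‖y − Φx‖² with a hard-thresholding projection onto s-sparse vectors. A minimal sketch; the unit step size, the Gaussian measurement matrix, and the problem sizes are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

# Illustrative setup: recover an s-sparse x from y = Phi @ x.
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = Phi @ x_true

x = np.zeros(n)
for _ in range(100):
    # Gradient step on ||y - Phi x||^2, then project onto s-sparse vectors.
    x = hard_threshold(x + Phi.T @ (y - Phi @ x), s)
```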
Highly Cited
2007
Much recent attention, both experimental and theoretical, has been focused on classification algorithms which produce voted… 
Highly Cited
2004
A generalized normalized gradient descent (GNGD) algorithm for linear finite-impulse response (FIR) adaptive filters is…
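
The idea behind such filters is a normalized gradient (NLMS-style) weight update whose regularization term is itself adapted by gradient descent. A minimal sketch in that spirit; the system-identification setup, the step sizes mu and rho, and the safeguard on eps are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

# Adaptive FIR filter: normalized gradient descent weight update with an
# adaptively tuned regularization term eps in the denominator.
rng = np.random.default_rng(0)
L, N = 4, 5000
w_true = rng.normal(size=L)                 # unknown FIR system to identify
u = rng.normal(size=N)                      # input signal
w = np.zeros(L)
mu, rho, eps = 0.5, 0.1, 0.1
prev_x, prev_e = np.zeros(L), 0.0

for n in range(L, N):
    x = u[n - L:n][::-1]                    # regressor (most recent sample first)
    d = w_true @ x + 0.01 * rng.normal()    # desired response with noise
    e = d - w @ x                           # a priori error
    w = w + mu * e * x / (x @ x + eps)      # normalized gradient step
    # Adapt the regularization term by a gradient step on the squared error.
    eps = eps - rho * mu * e * prev_e * (x @ prev_x) / (prev_x @ prev_x + eps)**2
    eps = max(eps, 1e-4)                    # safeguard: keep the step bounded
    prev_x, prev_e = x, e

print(np.round(w, 3), np.round(w_true, 3))  # learned weights track the system
```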