Rate of convergence
Known as: R-linear, Q-linear, Speed of convergence
In numerical analysis, the speed at which a convergent sequence approaches its limit is called the rate of convergence. Although strictly speaking, a… (Wikipedia)
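The rate above can be estimated numerically from the tail of a sequence. A minimal sketch (the helper `convergence_rate` and the example sequence are illustrative, not part of this page): for a Q-linearly convergent sequence, the ratio of successive errors |x_{k+1} − L| / |x_k − L| approaches a constant μ < 1.

```python
def convergence_rate(seq, limit):
    """Estimate the Q-linear rate mu = lim |x_{k+1} - L| / |x_k - L|
    from the tail of a convergent sequence."""
    errors = [abs(x - limit) for x in seq]
    ratios = [e_next / e for e, e_next in zip(errors, errors[1:]) if e > 0]
    return ratios[-1]  # the last ratio approximates the asymptotic rate

# Example: x_k = 2^{-k} converges Q-linearly to 0 with rate 1/2.
seq = [2.0 ** -k for k in range(30)]
rate = convergence_rate(seq, 0.0)  # ~0.5
```

A rate near 0 as k grows would instead indicate superlinear convergence; a ratio of squared-error to previous error staying bounded indicates quadratic convergence.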
Related topics
37 relations, including: Aitken's delta-squared process, Bisection method, Brent's method, CMA-ES, …
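Aitken's delta-squared process, first in the list above, is a classic way to accelerate a Q-linearly convergent sequence: from three consecutive terms it forms x_k − (Δx_k)² / Δ²x_k, which converges to the same limit faster. A minimal sketch (the fixed-point example using cos is illustrative, not from this page):

```python
import math

def aitken(seq):
    """Aitken's delta-squared acceleration: map each triple
    (x_k, x_{k+1}, x_{k+2}) to x_k - (x_{k+1} - x_k)**2 / (x_{k+2} - 2*x_{k+1} + x_k)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        if denom == 0:  # sequence already (numerically) converged
            break
        out.append(x0 - (x1 - x0) ** 2 / denom)
    return out

# Fixed-point iteration x_{k+1} = cos(x_k) converges Q-linearly
# to the Dottie number ~0.7390851; Aitken's process speeds this up.
seq = [1.0]
for _ in range(9):
    seq.append(math.cos(seq[-1]))
acc = aitken(seq)
# acc[-1] is much closer to the fixed point than seq[-1]
```

Iterating the transformation (Steffensen's method) can lift a linearly convergent fixed-point iteration to quadratic convergence.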
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited · 2020
On the Convergence of FedAvg on Non-IID Data
Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang
ICLR, 2020 · Corpus ID: 195798643
Federated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading…
Highly Cited · 2012
On the O(1/n) Convergence Rate of the Douglas-Rachford Alternating Direction Method
B. He, Xiaoming Yuan
SIAM J. Numer. Anal., 2012 · Corpus ID: 10461880
Alternating direction methods (ADMs) have been well studied in the literature, and they have found many efficient applications in…
Highly Cited · 2012
A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets
Nicolas Le Roux, Mark W. Schmidt, F. Bach
NIPS, 2012 · Corpus ID: 1084204
We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly…
Highly Cited · 2011
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
Mark W. Schmidt, Nicolas Le Roux, F. Bach
NIPS, 2011 · Corpus ID: 11262278
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal…
Highly Cited · 2010
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
John C. Duchi, Elad Hazan, Y. Singer
J. Mach. Learn. Res., 2010 · Corpus ID: 538820
We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in…
Highly Cited · 2009
Cuckoo Search via Lévy flights
Xin-She Yang, S. Deb
World Congress on Nature & Biologically Inspired…, 2009 · Corpus ID: 206491725
In this paper, we intend to formulate a new meta-heuristic algorithm, called Cuckoo Search (CS), for solving optimization…
Highly Cited · 2004
Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
A. Nemirovski
SIAM J. Optim., 2004 · Corpus ID: 11145819
We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1…
Highly Cited · 2001
ON CONVEX SOBOLEV INEQUALITIES AND THE RATE OF CONVERGENCE TO EQUILIBRIUM FOR FOKKER-PLANCK TYPE EQUATIONS
A. Arnold, P. Markowich, G. Toscani, A. Unterreiter
2001 · Corpus ID: 12460798
It is well known that the analysis of the large-time asymptotics of Fokker-Planck type equations by the entropy method is closely…
Highly Cited · 1993
A scaled conjugate gradient algorithm for fast supervised learning
M. Møller
Neural Networks, 1993 · Corpus ID: 8029054
Highly Cited · 1986
Convergence of an annealing algorithm
M. Lundy, A. Mees
Math. Program., 1986 · Corpus ID: 20977861
The annealing algorithm is a stochastic optimization method which has attracted attention because of its success with certain…