# A Stochastic Approximation Method

@inproceedings{Robbins2007ASA, title={A Stochastic Approximation Method}, author={Herbert E. Robbins}, year={2007} }

Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where α is a given constant. We give a method for making successive experiments at levels x1, x2, ... in such a way that xn will tend to θ in probability.
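The scheme described above is the Robbins–Monro iteration: starting from an initial guess, each step observes a noisy response y at the current level x and moves toward the root via x ← x + aₙ(α − y), with step sizes aₙ (e.g. aₙ = 1/n) satisfying Σaₙ = ∞ and Σaₙ² < ∞. A minimal sketch follows; the linear response M(x) = 2x + 1 and the Gaussian noise level are illustrative assumptions, not from the paper:

```python
import random

def robbins_monro(noisy_response, alpha, x0=0.0, n_steps=2000):
    """Approximate the root theta of M(x) = alpha from noisy observations.

    Uses the update x_{n+1} = x_n + a_n * (alpha - y_n) with a_n = 1/n,
    which satisfies sum(a_n) = inf and sum(a_n^2) < inf.
    """
    x = x0
    for n in range(1, n_steps + 1):
        y = noisy_response(x)            # noisy measurement of M(x)
        x = x + (1.0 / n) * (alpha - y)  # step toward the root
    return x

# Illustrative example: M(x) = 2x + 1 observed with Gaussian noise.
# Solving M(x) = 5 gives theta = 2, which the iterates approach.
random.seed(0)
est = robbins_monro(lambda x: 2 * x + 1 + random.gauss(0, 0.5), alpha=5.0)
```

Because M is assumed monotone increasing here, the sign of (α − y) points each step toward the root; the decreasing step sizes average out the observation noise.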


#### Citations

##### Publications citing this paper.

Showing 1–10 of 3,131 citations.

- Knowledge base completion by learning pairwise-interaction differentiated embeddings (cites methods; 4 excerpts; highly influenced)
- Progressive Identification of True Labels for Partial-Label Learning (cites background; 6 excerpts; highly influenced)
- SingCubic: Cyclic Incremental Newton-type Gradient Descent with Cubic Regularization for Non-Convex Optimization (cites methods; 7 excerpts; highly influenced)
- An Adaptive and Momental Bound Method for Stochastic Learning (cites background; 13 excerpts; highly influenced)
- Diversely Stale Parameters for Efficient Training of CNNs (cites background; 7 excerpts; highly influenced)
- Efficient Bayesian inference for univariate and multivariate non linear state space models with univariate autoregressive state equation (cites methods; 5 excerpts; highly influenced)
- Generalized Inner Loop Meta-Learning (cites background; 11 excerpts; highly influenced)
- Generalizing expectation propagation with mixtures of exponential family distributions and an application to Bayesian logistic regression (cites methods; 7 excerpts; highly influenced)
- Improved Zeroth-Order Variance Reduced Algorithms and Analysis for Nonconvex Optimization (cites methods; 27 excerpts; highly influenced)
- Improving Scalability of Parallel CNN Training by Adjusting Mini-Batch Size at Run-Time (cites methods; 11 excerpts; highly influenced)


### CITATION STATISTICS

- **402** highly influenced citations
- Averaged **469** citations per year from 2017 through 2019
- **46%** increase in citations per year in 2019 over 2018