Autour de L'Usage des gradients en apprentissage statistique. (Around the Use of Gradients in Machine Learning)

@inproceedings{Mass2017AutourDL,
  title={Autour de L'Usage des gradients en apprentissage statistique. (Around the Use of Gradients in Machine Learning)},
  author={Pierre-Yves Mass{\'e}},
  year={2017}
}
We prove a local convergence theorem, in a nonlinear setting, for the classical dynamical-system optimisation algorithm RTRL (Real-Time Recurrent Learning). RTRL works online, but maintains a huge amount of information, which makes it unfit to train even moderately large learning models. The “NoBackTrack” algorithm gets around this by replacing that information with an unbiased, low-dimensional, random approximation. We also prove the convergence, with probability arbitrarily close to one, of this algorithm to the local…
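The abstract contrasts the memory cost of full RTRL with the unbiased, low-dimensional random approximation kept by NoBackTrack. The following is a minimal numerical sketch of both ideas for a toy tanh recurrent network; the model, shapes, and names are illustrative assumptions, not the thesis's construction.

import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # hidden-state dimension (toy size, assumed)
W = 0.5 * rng.standard_normal((n, n))   # the n*n parameters being learned

# Full RTRL carries J_t = dh_t/dvec(W), an n x n^2 matrix, updated online:
#   J_{t+1} = D_{t+1} (W J_t + d(W h_t)/dvec(W)),  D_{t+1} = diag(1 - h_{t+1}^2).
h = np.zeros(n)
J = np.zeros((n, n * n))
for t in range(100):
    x = rng.standard_normal(n)
    h_new = np.tanh(W @ h + x)
    D = np.diag(1.0 - h_new ** 2)
    dWh_dW = np.kron(np.eye(n), h)      # d(W h)/dvec(W), row-major flattening of W
    J = D @ (W @ J + dWh_dW)
    h = h_new
# J alone costs O(n^3) memory, which is what makes RTRL unfit for large models.

# NoBackTrack-style reduction: compress a sum of rank-one terms sum_i v_i w_i^T
# into a single pair (v, w) whose outer product equals the sum *in expectation*.
def rank_one_reduce(pairs, rng):
    v_out = np.zeros_like(pairs[0][0])
    w_out = np.zeros_like(pairs[0][1])
    for v_i, w_i in pairs:
        s = rng.choice((-1.0, 1.0))     # independent Rademacher sign per term
        r = np.sqrt((np.linalg.norm(w_i) + 1e-12) / (np.linalg.norm(v_i) + 1e-12))
        v_out += s * r * v_i            # norm balancing keeps the variance low
        w_out += s * w_i / r
    return v_out, w_out                 # E[v_out w_out^T] = sum_i v_i w_i^T

The unbiasedness is the point: cross terms cancel because the signs are independent with zero mean, so the rank-one pair (v, w) can stand in for the full matrix J while keeping the expected update direction correct.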
