Phase transitions in optimal betting strategies

@article{Dinis2020PhaseTI,
  title={Phase transitions in optimal betting strategies},
  author={L. Dinis and J. Unterberger and D. Lacoste},
  journal={EPL},
  year={2020},
  volume={131},
  pages={60005}
}
L. Dinis (1), J. Unterberger (2) and D. Lacoste (3)
1 GISC, Grupo Interdisciplinar de Sistemas Complejos, and Dpto. de Estructura de la Materia, Física Térmica y Electrónica, Universidad Complutense de Madrid, 28040 Madrid, Spain
2 Institut Elie Cartan, UMR CNRS 7502, Université de Lorraine, BP 239, F-54506 Vandoeuvre-lès-Nancy Cedex, France
3 Gulliver Laboratory, UMR CNRS 7083, PSL Research University, ESPCI, 10 rue Vauquelin, F-75231 Paris Cedex 05, France
2 Citations

Universal constraints on selection strength in lineage trees
We obtain general inequalities constraining the difference between the average of an arbitrary function of a phenotypic trait, which includes the fitness landscape of the trait itself, in the…

References

The Kelly Capital Growth Investment Criterion: Theory and Practice
This volume provides the definitive treatment of fortune's formula, or the Kelly capital growth criterion as it is often called. The strategy is to maximize long-run wealth of the investor by…
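As a minimal illustration of the Kelly criterion mentioned in this reference (a sketch not taken from the paper or the cited volume), the classic single-bet Kelly fraction for a binary wager with win probability p and net odds b is f* = p - (1 - p)/b:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of wealth to bet on a binary wager: f* = p - (1 - p) / b.

    p: probability of winning; b: net odds (a unit stake returns b on a win).
    This choice maximizes the expected logarithmic growth rate of wealth.
    """
    f = p - (1.0 - p) / b
    return max(f, 0.0)  # never bet when the expected edge is negative

# Example: a 60% chance to win at even odds (b = 1) -> bet about 20% of wealth.
print(kelly_fraction(0.6, 1.0))  # ≈ 0.2
```

Betting more than f* increases variance enough to reduce long-run growth, which is the trade-off the Kelly literature studies.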
A Case Study of Thermodynamic Bounds for Chemical Kinetics
In this chapter, we illustrate recently obtained thermodynamic bounds for a number of enzymatic networks by focusing on simple examples of unicyclic or multi-cyclic networks. We also derive…
Optimization Methods for Large-Scale Machine Learning
A major theme of this study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient method has traditionally played a central role, while conventional gradient-based nonlinear optimization techniques typically falter, leading to a discussion about the next generation of optimization methods for large-scale machine learning.
Fortune's Formula (Hill and Wang, 2005)
See Supplemental Material for details on simulations, on the exact solution for two horses, and on the analysis of the Pareto front.