Reinforcement Learning Algorithm for Mixed Mean Field Control Games
@inproceedings{Angiuli2022ReinforcementLA,
  title  = {Reinforcement Learning Algorithm for Mixed Mean Field Control Games},
  author = {Andrea Angiuli and Nils Detering and Jean-Pierre Fouque and Jimin Lin},
  year   = {2022}
}
We present a new combined Mean Field Control Game (MFCG) problem, which can be interpreted as a competitive game between collaborating groups, with its solution a Nash equilibrium between the groups. Within each group the players coordinate their strategies. An example of such a situation is a modification of the classical trader's problem: groups of traders maximize their wealth while facing transaction costs for their own trades and a cost on their own terminal position. In addition…
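As a rough illustration of the cost structure mentioned in the abstract (transaction costs on a trader's own trades plus a penalty on the terminal position), here is a minimal Python sketch that evaluates such a cost along one inventory trajectory. The quadratic form, the coefficients c_tx and c_term, and the inventory dynamics are illustrative assumptions, not the paper's actual model.

import numpy as np

# Illustrative stand-in for the modified trader's problem: quadratic
# transaction cost on the trader's own trades plus a quadratic penalty
# on the terminal inventory position. Coefficients are hypothetical.
def trader_cost(trading_rates, q0=0.0, dt=0.1, c_tx=0.5, c_term=1.0):
    q = q0                                   # current inventory
    running_cost = 0.0
    for nu in trading_rates:                 # nu: trading rate at each step
        running_cost += c_tx * nu**2 * dt    # cost for the trader's own trades
        q += nu * dt                         # inventory accumulates the trades
    return running_cost + c_term * q**2      # add cost for the terminal position

# Example: unwinding a unit inventory at a constant rate over 10 steps.
print(trader_cost(np.full(10, -1.0), q0=1.0))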
3 Citations
Learning Mean Field Games: A Survey
- Computer Science, ArXiv
- 2022
A general framework is presented in which classical iterative methods (based on best-response computation or policy evaluation) solve Mean Field Games exactly, and it is explained how RL can be used to learn MFG solutions in a model-free way.
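Very schematically, the iterative methods referred to here alternate a best-response computation against a frozen population distribution with an update of that distribution until a fixed point is reached. The skeleton below is a sketch under that reading; best_response and induced_distribution are hypothetical placeholders for problem-specific routines, not functions from any library.

# Fixed-point skeleton for solving an MFG by iterated best response.
# Distributions are represented as dicts mapping states to probabilities.
def solve_mfg_fixed_point(mu0, best_response, induced_distribution,
                          n_iters=100, tol=1e-6):
    mu = dict(mu0)
    for _ in range(n_iters):
        pi = best_response(mu)               # optimal policy against the frozen crowd
        mu_new = induced_distribution(pi)    # distribution generated by that policy
        if max(abs(mu_new[s] - mu[s]) for s in mu) < tol:
            break                            # approximate MFG equilibrium reached
        mu = mu_new
    return pi, mu_new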
Reinforcement Learning for Intra-and-Inter-Bank Borrowing and Lending Mean Field Control Game
- Economics
- 2022
We propose a mean field control game model for the intra-and-inter-bank borrowing and lending problem. This framework allows us to study the competitive game arising between groups of collaborative…
Markov Decision Processes under Model Uncertainty
- Mathematics, Computer Science, ArXiv
- 2022
It turns out that in scenarios where the market is volatile or bearish, the optimal portfolio strategies obtained from the corresponding robust optimization problem outperform those computed without model uncertainty, showcasing the importance of taking model uncertainty into account.
References
Showing 1-10 of 28 references
Reinforcement Learning for Mean Field Games, with Applications to Economics
- Economics, ArXiv
- 2021
A two-timescale approach with RL for MFG and MFC is developed, relying on a unified Q-learning algorithm that simultaneously updates an action-value function and a distribution, but at different rates, in a model-free fashion.
Unified reinforcement Q-learning for mean field game and control problems
- Computer Science, Mathematics of Control, Signals, and Systems
- 2022
A Reinforcement Learning (RL) algorithm to solve infinite-horizon asymptotic Mean Field Game (MFG) and Mean Field Control (MFC) problems is presented, described as a unified two-timescale Mean Field Q-learning algorithm.
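A minimal tabular sketch of the two-timescale idea described in this entry and the previous one: a single Q-table and an empirical state distribution are both updated at every step, but with different learning rates rho_q and rho_mu. The environment interface (reset and a step function that takes the current distribution), the reward shape, and the numerical values of the rates are assumptions made for illustration only.

import numpy as np

def two_timescale_q_learning(env, n_states, n_actions,
                             rho_q=0.1, rho_mu=0.01, gamma=0.95,
                             eps=0.1, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    mu = np.full(n_states, 1.0 / n_states)   # estimate of the population distribution
    s = env.reset()
    for _ in range(n_steps):
        # epsilon-greedy action from the current Q-table
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = env.step(s, a, mu)       # reward may depend on the distribution
        # Q update at rate rho_q
        Q[s, a] += rho_q * (r + gamma * Q[s_next].max() - Q[s, a])
        # distribution update at rate rho_mu (stochastic approximation toward s_next)
        mu += rho_mu * (np.eye(n_states)[s_next] - mu)
        s = s_next
    return Q, mu

# Choosing rho_q and rho_mu of different orders of magnitude makes one of the two
# objects evolve on a faster timescale than the other, which is the mechanism the
# cited unified algorithm exploits to target the MFG or the MFC solution.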
Reinforcement Learning in Stationary Mean-field Games
- Computer Science, AAMAS
- 2019
This paper studies reinforcement learning in a specific class of multi-agent systems called mean-field games, and presents two reinforcement learning algorithms that converge to the right solution under mild technical conditions.
On the Convergence of Model Free Learning in Mean Field Games
- Computer Science, AAAI
- 2020
This paper analyzes in full generality the convergence of a fictitious iterative scheme that uses any single-agent learning algorithm at each step of the mean-field multi-agent system, and shows for the first time convergence of model-free learning algorithms towards non-stationary MFG equilibria.
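For concreteness, the kind of fictitious iterative scheme analyzed here can be sketched as an outer loop in which a single-agent learner best-responds to the running average of past population distributions. learn_best_response and induced_distribution are hypothetical placeholders for the single-agent RL routine and for the distribution its policy induces.

# Fictitious-play-style outer loop: best-respond to the averaged crowd,
# then fold the newly induced distribution into the running average.
def fictitious_play(mu0, learn_best_response, induced_distribution, n_iters=50):
    mu_bar = dict(mu0)                       # running average of past distributions
    for k in range(1, n_iters + 1):
        pi_k = learn_best_response(mu_bar)   # e.g. Q-learning against the averaged crowd
        mu_k = induced_distribution(pi_k)
        for s in mu_bar:                     # mu_bar <- ((k - 1) * mu_bar + mu_k) / k
            mu_bar[s] += (mu_k[s] - mu_bar[s]) / k
    return pi_k, mu_bar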
Mean Field Multi-Agent Reinforcement Learning
- Computer Science, ICML
- 2018
Existing multi-agent reinforcement learning methods are typically limited to a small number of agents. When the number of agents grows large, learning becomes intractable due to the curse of…
Linear-Quadratic Mean-Field Reinforcement Learning: Convergence of Policy Gradient Methods
- Computer Science, ArXiv
- 2019
This work proves rigorously the convergence of exact and model-free policy gradient methods in a mean-field linear-quadratic setting and provides graphical evidence of the convergence based on implementations of these algorithms.
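As a hedged illustration of a model-free policy gradient in a linear-quadratic mean-field setting, the sketch below applies zeroth-order (perturbation-based) gradient descent to a scalar feedback gain. The dynamics, cost coefficients, and mean-field coupling term are illustrative assumptions and do not reproduce the cited paper's setting.

import numpy as np

def lq_mf_cost(K, a=0.9, b=0.5, q=1.0, r=0.1, q_bar=0.5,
               x0=1.0, xbar0=0.5, T=50):
    """Finite-horizon cost of the feedback u = -K*x when the whole population
    uses the same gain, so the population mean follows the same closed loop."""
    x, x_bar = x0, xbar0
    cost = 0.0
    for _ in range(T):
        u = -K * x
        cost += q * x**2 + r * u**2 + q_bar * (x - x_bar)**2  # mean-field coupling
        x = a * x + b * u
        x_bar = (a - b * K) * x_bar          # deterministic evolution of the mean
    return cost

def zeroth_order_policy_gradient(K0=0.0, lr=1e-3, sigma=0.05, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    K = K0
    for _ in range(n_iters):
        d = rng.choice([-1.0, 1.0])
        # two-point estimate of dJ/dK from perturbed rollouts (model-free)
        g = (lq_mf_cost(K + sigma * d) - lq_mf_cost(K - sigma * d)) / (2 * sigma) * d
        K -= lr * g                          # gradient descent on the gain
    return K

print(zeroth_order_policy_gradient())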
Model-Free Mean-Field Reinforcement Learning: Mean-Field MDP and Mean-Field Q-Learning
- Computer Science, Mathematics, ArXiv
- 2019
This work introduces generic model-free algorithms based on the state-action value function at the mean field level and proves convergence for a prototypical Q-learning method for mean field control problems.
Deep Fictitious Play for Finding Markovian Nash Equilibrium in Multi-Agent Games
- Computer Science, MSML
- 2020
A deep neural network-based algorithm is proposed to identify the Markovian Nash equilibrium of general large N-player stochastic differential games, and it finds the approximate Nash equilibrium accurately, which, to the best of the authors' knowledge, is difficult to achieve with other numerical algorithms.
Learning in Mean-Field Games
- Computer Science, IEEE Transactions on Automatic Control
- 2014
Approximate dynamic programming (ADP) techniques for the design and adaptation (learning) of approximately optimal control laws for this model are introduced, and a parameterization is proposed based on an analysis of the mean-field PDE model for the game.
Dynamic Programming for Mean-Field Type Control
- Mathematics, J. Optim. Theory Appl.
- 2016
A Hamilton–Jacobi–Bellman fixed-point algorithm is compared to a steepest-descent method derived from the calculus of variations, and an extended Bellman principle is derived by a different argument.