Corpus ID: 56482327

Double Deep Q-Learning for Optimal Execution

@article{Ning2018DoubleDQ,
  title={Double Deep Q-Learning for Optimal Execution},
  author={Brian Ning and Franco Ho Ting Ling and Sebastian Jaimungal},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.06600}
}
  • Economics, Computer Science, Mathematics
  • Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous-time stochastic control to solve them. Here, we instead take a model-free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected neural network trained using experience replay and Double DQN, with input features given by the current state of the limit…
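The abstract's central ingredient is the Double DQN update, in which the online network selects the next action and a separate target network evaluates it. A minimal sketch of that target computation follows; this is an illustration of the general technique (van Hasselt et al.), not the paper's implementation, and the function and variable names are my own:

```python
import numpy as np

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target: decouple action selection (online
    network) from action evaluation (target network) to reduce the
    overestimation bias of standard Q-learning."""
    a_star = int(np.argmax(q_online_next))   # online net picks the action
    bootstrap = q_target_next[a_star]        # target net scores that action
    return reward + (0.0 if done else gamma * bootstrap)

# Toy example with 3 actions: the online net prefers action 2,
# but the target net's value for action 2 is used in the bootstrap.
q_online = np.array([0.1, 0.5, 0.9])
q_target = np.array([0.2, 0.4, 0.3])
y = double_dqn_target(reward=1.0, gamma=0.99,
                      q_online_next=q_online,
                      q_target_next=q_target, done=False)
# y = 1.0 + 0.99 * 0.3 = 1.297
```

In training, `y` would serve as the regression target for the online network's Q-value of the action actually taken, with transitions drawn from an experience-replay buffer as the abstract describes.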

