Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?

@article{Ota2020CanII,
  title={Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?},
  author={K. Ota and Tomoaki Oiki and Devesh K. Jha and Toshisada Mariyama and D. Nikovski},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.01629}
}
Abstract: Deep reinforcement learning (RL) algorithms have recently achieved remarkable successes in various sequential decision-making tasks, leveraging advances in methods for training large deep networks. However, these methods usually require large amounts of training data, which is often a big problem for real-world applications. One natural question to ask is whether learning good representations for states and using larger networks helps in learning better policies. In this paper, we try to study…
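The (truncated) abstract asks whether lifting the state into a higher-dimensional learned representation helps the policy. A minimal sketch of that general idea, assuming a hypothetical fixed random-projection encoder as a stand-in for the learned feature networks the paper studies (all names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(state_dim, feature_dim):
    """Hypothetical encoder: project the raw state into a higher-dimensional
    nonlinear feature space, then concatenate raw state and features so the
    policy's input dimensionality increases rather than being replaced."""
    W = rng.normal(size=(state_dim, feature_dim)) / np.sqrt(state_dim)
    def encode(state):
        features = np.tanh(state @ W)             # nonlinear lifted features
        return np.concatenate([state, features])  # raw state + features
    return encode

state_dim, feature_dim = 4, 64
encode = make_encoder(state_dim, feature_dim)
s = rng.normal(size=state_dim)
x = encode(s)
print(x.shape)  # policy now consumes a (4 + 64)-dimensional input
```

In an actual RL pipeline, `x` would be fed to the actor and critic networks in place of the raw observation; the paper's question is whether this larger input improves sample efficiency and final performance.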
