Corpus ID: 216562568

Sample-Efficient Model-based Actor-Critic for an Interactive Dialogue Task

@article{Kudashkina2020SampleEfficientMA,
  title={Sample-Efficient Model-based Actor-Critic for an Interactive Dialogue Task},
  author={Katya Kudashkina and Valliappa Chockalingam and Graham W. Taylor and Michael H. Bowling},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.13657}
}
Human-computer interactive systems that rely on machine learning are becoming paramount to the lives of millions of people who use digital assistants on a daily basis. Yet, further advances are limited by the availability of data and the cost of acquiring new samples. One way to address this problem is by improving the sample efficiency of current approaches. As a solution path, we present a model-based reinforcement learning algorithm for an interactive dialogue task. We build on commonly used…
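The abstract is truncated before the algorithmic details, so as a rough, hypothetical illustration only (not the authors' method), the sketch below shows what a Dyna-style model-based actor-critic loop can look like: real transitions train the actor and critic and also fit a model, and the model then generates extra simulated updates to improve sample efficiency. The toy environment, names, and hyperparameters are invented for this example.

```python
import numpy as np

# Toy MDP standing in for a tiny dialogue task: states are dialogue stages,
# actions are system responses. Hypothetical; not the paper's environment.
N_STATES, N_ACTIONS = 4, 2
rng = np.random.default_rng(0)

def env_step(s, a):
    """Action 1 moves the dialogue forward, action 0 stalls;
    reaching the last state ends the episode with reward 1."""
    s_next = min(s + 1, N_STATES - 1) if a == 1 else s
    r = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, r, s_next == N_STATES - 1

theta = np.zeros((N_STATES, N_ACTIONS))   # actor: softmax policy logits
V = np.zeros(N_STATES)                     # critic: state-value estimates
model = {}                                 # learned model: (s, a) -> (s', r, done)
gamma, alpha_v, alpha_pi, n_planning = 0.95, 0.1, 0.05, 10

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def actor_critic_update(s, a, r, s_next, done):
    """One TD(0) actor-critic update, shared by real and simulated experience."""
    target = r + (0.0 if done else gamma * V[s_next])
    td_error = target - V[s]
    V[s] += alpha_v * td_error
    grad_log_pi = -policy(s)               # gradient of log softmax wrt logits
    grad_log_pi[a] += 1.0
    theta[s] += alpha_pi * td_error * grad_log_pi

for episode in range(200):
    s, done = 0, False
    while not done:
        a = rng.choice(N_ACTIONS, p=policy(s))
        s_next, r, done = env_step(s, a)
        actor_critic_update(s, a, r, s_next, done)   # learn from the real step
        model[(s, a)] = (s_next, r, done)            # update the learned model
        # Dyna-style planning: replay simulated transitions from the model
        # to extract more updates from each real interaction.
        for _ in range(n_planning):
            ps, pa = list(model)[rng.integers(len(model))]
            p_next, p_r, p_done = model[(ps, pa)]
            actor_critic_update(ps, pa, p_r, p_next, p_done)
        s = s_next

print("learned greedy action per state:", theta.argmax(axis=1))
```

The planning loop is what distinguishes the model-based variant from plain actor-critic: with n_planning set to 0 the code reduces to standard one-step actor-critic, while larger values trade computation for fewer required real interactions.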
