Joy, distress, hope, and fear in reinforcement learning


In this paper we present a mapping between joy, distress, hope, and fear and reinforcement learning primitives. Joy/distress is a signal derived from the RL update signal, while hope/fear is derived from the utility of the current state. Agent-based simulation experiments replicate psychological and behavioral dynamics of emotion, including: joy and distress reactions that develop prior to hope and fear; extinction of fear; habituation of joy; and greater intensity of joy and distress on tasks with more randomness. This work distinguishes itself by assessing the dynamics of emotion in an adaptive-agent framework, coupling it to the literature on habituation, development, and extinction.
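To make the mapping concrete, here is a minimal sketch of the idea on a toy two-state chain learned with TD(0). The exact signal definitions (sign conventions, scaling) used in the paper are not given in this abstract, so the choices below (joy/distress read off the raw TD error, hope/fear read off the current state value) are illustrative assumptions, not the authors' implementation:

```python
def td_emotions(episodes=200, alpha=0.1, gamma=0.9):
    """Toy two-state chain: s0 -> s1 -> terminal with reward 1.
    After each TD(0) update, derive emotion signals following the
    abstract's mapping: joy/distress from the RL update signal
    (the TD error), hope/fear from the utility V(s) of the
    current state. Sign conventions here are assumptions."""
    V = {0: 0.0, 1: 0.0}
    trace = []  # (state, joy_distress, hope_fear) per update
    for _ in range(episodes):
        # fixed transitions: (state, next_state, reward, terminal)
        for s, s_next, r, terminal in [(0, 1, 0.0, False),
                                       (1, None, 1.0, True)]:
            target = r + (0.0 if terminal else gamma * V[s_next])
            td_error = target - V[s]   # the RL update signal
            joy_distress = td_error    # >0 read as joy, <0 as distress
            hope_fear = V[s]           # state utility, read as hope level
            V[s] += alpha * td_error
            trace.append((s, joy_distress, hope_fear))
    return V, trace
```

Running this reproduces two of the abstract's qualitative dynamics: the joy signal is largest on early surprising updates and shrinks as values converge (habituation of joy), while the hope signal starts at zero and only grows once values have been learned (joy develops prior to hope).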

Cite this paper

@inproceedings{Jacobs2014JoyDH,
  title     = {Joy, distress, hope, and fear in reinforcement learning},
  author    = {Elmer Jacobs and Joost Broekens and Catholijn M. Jonker},
  booktitle = {AAMAS},
  year      = {2014}
}