How to Dynamically Merge Markov Decision Processes

@inproceedings{Singh1997HowTD,
  title={How to Dynamically Merge Markov Decision Processes},
  author={Satinder P. Singh and David Cohn},
  booktitle={NIPS},
  year={1997}
}
We are frequently called upon to perform multiple tasks that compete for our attention and resources. Often we know the optimal solution to each task in isolation; in this paper, we describe how this knowledge can be exploited to efficiently find good solutions for doing the tasks in parallel. We formulate this problem as that of dynamically merging multiple Markov decision processes (MDPs) into a composite MDP, and present a new theoretically-sound dynamic programming algorithm for finding an…
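The composite-MDP formulation in the abstract can be illustrated with a minimal sketch. The code below builds the cross-product of two toy component MDPs (composite state = pair of component states, composite action = pair of component actions, rewards summed) and solves the result with plain value iteration. The `merge_mdps` helper, the deterministic-transition encoding, and the toy tasks are all hypothetical choices made for brevity; this shows only the composite construction, not the paper's dynamic merging algorithm.

```python
def merge_mdps(m1, m2):
    """Cross-product of two deterministic MDPs.

    Each component MDP is a dict: trans[state][action] = (next_state, reward).
    In the composite, rewards from the two components are summed.
    """
    composite = {}
    for s1, acts1 in m1.items():
        for s2, acts2 in m2.items():
            composite[(s1, s2)] = {
                (a1, a2): ((n1, n2), r1 + r2)
                for a1, (n1, r1) in acts1.items()
                for a2, (n2, r2) in acts2.items()
            }
    return composite

def value_iteration(mdp, gamma=0.9, tol=1e-6):
    """Gauss-Seidel value iteration on a deterministic MDP."""
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, acts in mdp.items():
            best = max(r + gamma * V[n] for _, (n, r) in acts.items())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Two hypothetical single-task MDPs competing for the agent's attention.
task_a = {
    "idle": {"work": ("busy", 1.0), "wait": ("idle", 0.0)},
    "busy": {"finish": ("idle", 2.0)},
}
task_b = {
    "lo": {"go": ("hi", 0.5)},
    "hi": {"go": ("lo", 0.5)},
}

composite = merge_mdps(task_a, task_b)
V = value_iteration(composite)
```

Solving the composite directly, as above, costs time exponential in the number of component MDPs; the paper's contribution is a dynamic programming algorithm that exploits the known single-task solutions to avoid that blowup.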
This paper has highly influenced 13 other papers and has 136 citations.

Citations

Publications citing this paper.
Showing 2 of 82 extracted citations:

Learning and Coordinating Repertoires of Behaviors with Common Reward: Credit Assignment and Module Activation

Computational and Robotic Models of the Hierarchical Organization of Behavior • 2013
Highly Influenced

Lagrangian Relaxation for Large-Scale Multi-agent Planning

2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology • 2012
Highly Influenced

Citations per Year

Semantic Scholar estimates that this publication has 136 citations based on the available data.


