From policies to influences: a framework for nonlocal abstraction in transition-dependent Dec-POMDP agents

@inproceedings{Witwicki2010FromPT,
  title={From policies to influences: a framework for nonlocal abstraction in transition-dependent Dec-POMDP agents},
  author={Stefan J. Witwicki and Edmund H. Durfee},
  booktitle={AAMAS},
  year={2010}
}
Decentralized Partially-Observable Markov Decision Processes (Dec-POMDPs) are powerful theoretical models for deriving optimal coordination policies of agent teams in environments with uncertainty. Unfortunately, their general NEXP solution complexity [3] presents significant challenges when applying them to real-world problems, particularly those involving teams of more than two agents. Inevitably, the policy space becomes intractably large as agents coordinate joint decisions that are based…
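
For context (a reminder of the standard formulation, not material from the paper itself): the Dec-POMDP model underlying the cited complexity result [3] is conventionally defined as the tuple

    \langle I, S, \{A_i\}_{i \in I}, P, R, \{\Omega_i\}_{i \in I}, O \rangle

with agent set I, states S, per-agent actions A_i and observations \Omega_i, joint transition function P(s' \mid s, \vec{a}), joint reward R(s, \vec{a}), and joint observation function O(\vec{o} \mid s', \vec{a}). The policy-space blowup the abstract alludes to can be made precise: a deterministic horizon-T local policy maps each observation history to an action, and agent i has \sum_{t=0}^{T-1} |\Omega_i|^t = (|\Omega_i|^T - 1)/(|\Omega_i| - 1) distinct histories, so the number of candidate local policies is

    |\Pi_i| = |A_i|^{(|\Omega_i|^T - 1)/(|\Omega_i| - 1)},

which grows doubly exponentially in the horizon T.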

References

R. Nair, M. Tambe, M. Yokoo, D. V. Pynadath, and S. Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In IJCAI, pages 705–711, 2003.

D. Bernstein, R. Givan, N. Immerman, and S. Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819–840, 2002.
