- Eric A. Hansen, Zhengzhu Feng
- AIPS
- 2000

Contingent planning, constructing a plan in which action selection is contingent on imperfect information received during plan execution, can be formalized as the problem of solving a partially observable Markov decision process (POMDP). Traditional dynamic programming algorithms for POMDPs use a flat state representation that enumerates all possible…

- Zhengzhu Feng, Eric A. Hansen
- AAAI/IAAI
- 2002

We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that…

We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it…
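The region-based idea in the abstract above can be illustrated with a toy piecewise-constant value function over a one-dimensional state space. This is a minimal sketch: the `merge_regions` helper and the interval encoding are assumptions for illustration, not the paper's actual data structure.

```python
def merge_regions(partition):
    """Merge adjacent regions that carry the same value.

    `partition` is a list of ((low, high), value) pairs covering a
    1-D state space in increasing order. Keeping the partition as
    coarse as possible is what lets region-based dynamic programming
    avoid evaluating individual states.
    """
    merged = [partition[0]]
    for (lo, hi), val in partition[1:]:
        (mlo, mhi), mval = merged[-1]
        if val == mval and mhi == lo:
            # Same value and contiguous: extend the previous region.
            merged[-1] = ((mlo, hi), val)
        else:
            merged.append(((lo, hi), val))
    return merged
```

For example, `merge_regions([((0, 1), 2.0), ((1, 3), 2.0), ((3, 4), 5.0)])` collapses the first two regions into a single region covering (0, 3).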

- Zhengzhu Feng, Shlomo Zilberstein
- UAI
- 2004

We present a major improvement to the incremental pruning algorithm for solving partially observable Markov decision processes. Our technique targets the cross-sum step of the dynamic programming (DP) update, a key source of complexity in POMDP algorithms. Instead of reasoning about the whole belief space when pruning the cross-sums, our algorithm divides…
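As a rough illustration of what pruning buys, here is a sketch of the cheap pointwise-dominance test often used as a pre-filter before the LP-based pruning the abstract refers to. The function name and the list-of-tuples encoding of alpha-vectors are assumptions of this sketch, not the paper's implementation.

```python
def prune_pointwise(vectors):
    """Drop alpha-vectors that are pointwise dominated by another vector.

    Full incremental pruning tests dominance over the entire belief
    simplex with linear programs; the pointwise test below is a sound
    but incomplete filter that removes a vector only when some other
    vector is at least as good in every state. Assumes the input
    vectors are distinct.
    """
    kept = []
    for i, v in enumerate(vectors):
        dominated = any(
            j != i and all(w_k >= v_k for w_k, v_k in zip(w, v))
            for j, w in enumerate(vectors)
        )
        if not dominated:
            kept.append(v)
    return kept
```

For instance, with vectors over two states, `(0.2, 0.2)` is dominated by `(0.5, 0.5)` and is removed, while the three mutually undominated vectors survive.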

Symbolic representations have been used successfully in off-line planning algorithms for Markov decision processes. We show that they can also improve the performance of online planners. In addition to reducing computation time, symbolic generalization can reduce the amount of costly real-world interactions required for convergence. We introduce Symbolic…

In a peer-to-peer file-sharing system, a client desiring a particular file must choose a source from which to download. The problem of selecting a good data source is difficult because some peers may not be encountered more than once, and many peers are on low-bandwidth connections. Despite these facts, information obtained about peers just prior to the…

- Eric A. Hansen, Rong Zhou, Zhengzhu Feng
- SARA
- 2002

We show how to use symbolic model-checking techniques in heuristic search algorithms for both deterministic and decision-theoretic planning problems. A symbolic approach exploits state abstraction by using decision diagrams to compactly represent sets of states and operators on sets of states. In earlier work, symbolic model-checking techniques have been…
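The set-at-a-time style of symbolic search can be sketched with plain Python sets standing in for decision diagrams (which represent the same sets compactly). The `reachable` function and its operator interface are assumptions of this sketch.

```python
def reachable(initial, operators):
    """Breadth-first reachability computed a set of states at a time.

    Each operator maps a *set* of states to the set of their
    successors, mirroring how symbolic planners apply operators to
    decision-diagram representations of state sets rather than to
    individual states.
    """
    seen = set(initial)
    frontier = set(initial)
    while frontier:
        successors = set()
        for op in operators:
            successors |= op(frontier)
        frontier = successors - seen  # only genuinely new states
        seen |= frontier
    return seen
```

For example, with a single operator that increments a counter modulo 10, `reachable({0}, [step])` returns all ten states.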

We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. It uses state abstraction to avoid evaluating states individually. And it uses forward search from a start state, guided by an admissible heuristic, to avoid evaluating all states. These approaches are combined in a novel way that…

- Zhengzhu Feng, Shlomo Zilberstein
- AAAI
- 2005

We present a simple, yet effective improvement to the dynamic programming algorithm for solving partially observable Markov decision processes. The technique targets the vector pruning operation during the maximization step, a key source of complexity in POMDP algorithms. We identify two types of structures in the belief space and exploit them to reduce…

- Martin William Allen, Alan Carlin, +11 authors Louis Theran
- 2009

Agent Interactions in Decentralized Environments