
- Eric A. Hansen, Shlomo Zilberstein
- Artif. Intell.
- 2001

Classic heuristic search algorithms can find solutions that take the form of a simple path (A*), a tree, or an acyclic graph (AO*). In this paper, we describe a novel generalization of heuristic search, called LAO*, that can find solutions with loops. We show that LAO* can be used to solve Markov decision problems and that it shares the advantage heuristic… (More)
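LAO* handles loops by interleaving heuristic search with dynamic-programming backups. A minimal sketch of such a backup, value iteration on a hypothetical two-state problem whose optimal policy contains a loop (an illustration of why loopy solutions arise, not the LAO* algorithm itself):

```python
# Value-iteration sketch for a tiny MDP whose optimal policy loops.
# Hypothetical toy problem, not the paper's LAO* algorithm.

# states 0 and 1 are non-goal; state 2 is the absorbing goal
# transitions[state][action] = [(probability, next_state, cost), ...]
transitions = {
    0: {"try": [(0.5, 2, 1.0), (0.5, 0, 1.0)],   # may loop back to 0
        "move": [(1.0, 1, 2.0)]},
    1: {"try": [(0.9, 2, 1.0), (0.1, 1, 1.0)]},
}

def value_iteration(transitions, goal, eps=1e-9):
    V = {s: 0.0 for s in transitions}
    V[goal] = 0.0
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = min(
                sum(p * (c + V.get(ns, 0.0)) for p, ns, c in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(transitions, goal=2)
# V[0] -> 2.0: retrying from state 0 (the looping policy) beats moving to 1
```

Here the optimal action in state 0 returns to state 0 with probability 0.5, so the solution is a cyclic graph rather than a path, tree, or acyclic graph.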

- Eric A. Hansen, Daniel S. Bernstein, Shlomo Zilberstein
- AAAI
- 2004
We develop an exact dynamic programming algorithm for partially observable stochastic games (POSGs). The algorithm is a synthesis of dynamic programming for partially observable Markov decision processes (POMDPs) and iterative elimination of dominated strategies in normal form games. We prove that it iteratively eliminates very weakly dominated strategies… (More)
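The game-theoretic half of that synthesis can be illustrated in isolation. A sketch of iterated elimination of dominated pure strategies in a two-player normal-form game, using weak dominance for a deterministic result; the payoff matrices are a hypothetical example, not from the paper:

```python
# Iterated elimination of weakly dominated pure strategies in a
# two-player normal-form game. U1/U2 are the row and column players'
# payoff matrices; the prisoner's-dilemma payoffs below are hypothetical.

def eliminate_dominated(U1, U2):
    rows = set(range(len(U1)))
    cols = set(range(len(U1[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):
            # row i is weakly dominated by some surviving row k
            if any(all(U1[k][j] >= U1[i][j] for j in cols) and
                   any(U1[k][j] > U1[i][j] for j in cols)
                   for k in rows if k != i):
                rows.discard(i)
                changed = True
        for j in list(cols):
            # column j is weakly dominated by some surviving column k
            if any(all(U2[i][k] >= U2[i][j] for i in rows) and
                   any(U2[i][k] > U2[i][j] for i in rows)
                   for k in cols if k != j):
                cols.discard(j)
                changed = True
    return sorted(rows), sorted(cols)

# Prisoner's dilemma: strategy 1 ("defect") survives for both players.
U1 = [[3, 0], [5, 1]]
U2 = [[3, 5], [0, 1]]
surviving = eliminate_dominated(U1, U2)   # -> ([1], [1])
```

The paper's algorithm applies this kind of elimination to strategies represented as policy trees, interleaved with POMDP-style dynamic programming; the sketch shows only the normal-form step.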

- Eric A. Hansen
- UAI
- 1998

Most algorithms for solving POMDPs iteratively improve a value function that implicitly represents a policy and are said to search in value function space. This paper presents an approach to solving POMDPs that represents a policy explicitly as a finite-state controller and iteratively improves the controller by search in policy space. Two related algorithms… (More)

- Rong Zhou, Eric A. Hansen
- Artif. Intell.
- 2004

Recent work shows that the memory requirements of best-first heuristic search can be reduced substantially by using a divide-and-conquer method of solution reconstruction. We show that memory requirements can be reduced even further by using a breadth-first instead of a best-first search strategy. We describe optimal and approximate breadth-first heuristic… (More)
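The core idea, breadth-first layers with f-based pruning, can be sketched on a toy grid. The domain, heuristic, and upper bound below are hypothetical, and the paper's divide-and-conquer solution reconstruction is omitted:

```python
# Breadth-first heuristic search sketch: nodes are expanded in layers of
# equal g, and successors whose f = g + h exceeds a known upper bound are
# pruned, which keeps the stored frontier small. Toy 4-connected grid.

def bfhs(start, goal, passable, upper_bound):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier, seen, g = {start}, {start}, 0
    while frontier:
        if goal in frontier:
            return g                        # layer depth = path cost
        g += 1
        nxt = set()
        for x, y in frontier:
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in seen or not passable(n):
                    continue
                if g + h(n) > upper_bound:  # prune by f = g + h
                    continue
                seen.add(n)
                nxt.add(n)
        frontier = nxt
    return None                             # no path within the bound

passable = lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5
distance = bfhs((0, 0), (3, 2), passable, upper_bound=5)   # -> 5
```

Because every node in a layer has the same g, the f-based pruning confines memory use to nodes that can lie on a solution no worse than the upper bound.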

- Eric A. Hansen, Rong Zhou
- J. Artif. Intell. Res.
- 2007

We describe how to convert the heuristic search algorithm A* into an anytime algorithm that finds a sequence of improved solutions and eventually converges to an optimal solution. The approach we adopt uses weighted heuristic search to find an approximate solution quickly, and then continues the weighted search to find improved solutions as well as to… (More)
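A sketch of that scheme: weighted A* continued past its first solution, pruning against the incumbent cost until the open list empties and the incumbent is provably optimal. The graph, heuristic, and weight below are a hypothetical example:

```python
import heapq

# Anytime weighted A* sketch: an inflated heuristic (w > 1) yields a
# first, possibly suboptimal solution quickly; the search then continues,
# pruning against the incumbent, and records every improvement until the
# open list empties, at which point the incumbent is optimal.

def anytime_wastar(graph, h, start, goal, w=2.0):
    open_ = [(w * h[start], 0, start)]
    best_g = {start: 0}
    incumbent = float("inf")
    solutions = []
    while open_:
        f, g, node = heapq.heappop(open_)
        if g + h[node] >= incumbent:   # admissible h: cannot improve
            continue
        if node == goal:
            incumbent = g
            solutions.append(g)
            continue
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_, (g2 + w * h[nbr], g2, nbr))
    return solutions                   # last entry is the optimal cost

graph = {"S": [("A", 1), ("B", 1)], "A": [("G", 2)],
         "B": [("G", 3)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}   # admissible heuristic
solutions = anytime_wastar(graph, h, "S", "G")   # -> [4, 3]
```

On this toy graph the inflated heuristic finds the cost-4 route first, then the continued search improves it to the optimal cost of 3.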

- Eric A. Hansen
- NIPS
- 1997

A new policy iteration algorithm for partially observable Markov decision processes is presented that is simpler and more efficient than an earlier policy iteration algorithm of Sondik (1971, 1978). The key simplification is representation of a policy as a finite-state controller. This representation makes policy evaluation straightforward. The paper's… (More)
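That simplification can be made concrete: with a finite-state controller, policy evaluation reduces to a linear system over (controller node, state) pairs, solvable here by successive approximation. The two-state POMDP and one-node controller below are hypothetical, not taken from the paper:

```python
# Policy evaluation for a finite-state controller: each node fixes an
# action and an observation-conditioned successor node, so the value at
# every (node, state) pair satisfies a linear system, solved here by
# successive approximation. Hypothetical two-state POMDP.

GAMMA = 0.9
STATES = [0, 1]
OBS = ["good", "bad"]

# P[a][s] = [(prob, next_state), ...]; R[a][s] = reward;
# O[a][s2][obs] = probability of observing obs after reaching s2 under a
P = {"stay": {0: [(0.9, 0), (0.1, 1)], 1: [(0.2, 0), (0.8, 1)]}}
R = {"stay": {0: 1.0, 1: 0.0}}
O = {"stay": {0: {"good": 0.8, "bad": 0.2},
              1: {"good": 0.3, "bad": 0.7}}}

# a single self-looping controller node that always plays "stay"
controller = {"n0": {"action": "stay",
                     "next": {"good": "n0", "bad": "n0"}}}

def evaluate(controller, iters=500):
    V = {(n, s): 0.0 for n in controller for s in STATES}
    for _ in range(iters):
        new = {}
        for name, node in controller.items():
            a = node["action"]
            for s in STATES:
                future = sum(p * O[a][s2][o] * V[(node["next"][o], s2)]
                             for p, s2 in P[a][s] for o in OBS)
                new[(name, s)] = R[a][s] + GAMMA * future
        V = new
    return V

V = evaluate(controller)   # V[("n0", 0)] ~ 7.57, V[("n0", 1)] ~ 4.86
```

Because the controller has finitely many nodes, no belief-space integration is needed to evaluate it, which is the straightforwardness the abstract refers to.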

- Eric A. Hansen, Rong Zhou
- ICAPS
- 2003

We develop a hierarchical approach to planning for partially observable Markov decision processes (POMDPs) in which a policy is represented as a hierarchical finite-state controller. To provide a foundation for this approach, we discuss some extensions of the POMDP framework that allow us to formalize the process of abstraction by which a hierarchical… (More)

- Eric A. Hansen, Shlomo Zilberstein
- Artif. Intell.
- 2001

Anytime algorithms offer a tradeoff between solution quality and computation time that has proved useful in solving time-critical problems such as planning and scheduling, belief network evaluation, and information gathering. To exploit this tradeoff, a system must be able to decide when to stop deliberation and act on the currently available solution. This… (More)
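A minimal sketch of such a stopping rule: deliberation continues while the expected gain in solution quality over the next step exceeds the cost of that step's time. The quality profile and time cost below are hypothetical, not the paper's model:

```python
# Monitoring sketch for an anytime algorithm: stop deliberating once the
# expected gain in solution quality from one more step no longer exceeds
# the cost of the time that step takes. Hypothetical profile and cost.

def run_until_stop(quality_profile, time_cost):
    """quality_profile[t] = solution quality after t steps."""
    t = 0
    while t + 1 < len(quality_profile):
        expected_gain = quality_profile[t + 1] - quality_profile[t]
        if expected_gain <= time_cost:   # not worth another step
            break
        t += 1
    return t, quality_profile[t]

stop_time, quality = run_until_stop([0, 5, 8, 9, 9.4], time_cost=1.0)
# stops at t = 2 with quality 8: later gains (1, then 0.4) do not repay
# their time cost
```

Real monitoring uses a learned performance profile and run-time evidence rather than a known quality curve, but the trade-off being optimized is the one shown.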

- Rong Zhou, Eric A. Hansen
- AAAI
- 2007

We describe a novel approach to parallelizing graph search using structured duplicate detection. Structured duplicate detection was originally developed as an approach to external-memory graph search that reduces the number of expensive disk I/O operations needed to check stored nodes for duplicates, by using an abstraction of the search graph to localize… (More)
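The localization idea can be sketched without the parallel or external-memory machinery: an abstraction maps each state to a block, the closed list is stored per block, and checking a successor for duplicates touches only its own block. The toy grid domain and abstraction below are hypothetical:

```python
from collections import deque

# Structured-duplicate-detection sketch: an abstraction function hashes
# states into blocks, and a node's duplicates can only appear in the
# block its abstract state maps to, so duplicate checks are localized.
# Toy 4-connected grid with a coarse 2x2-cell abstraction.

def abstract(state):
    x, y = state
    return (x // 2, y // 2)   # blocks of 2x2 grid cells

def bfs_distance(start, goal, passable):
    closed = {abstract(start): {start}}   # abstract block -> stored states
    queue = deque([(start, 0)])
    while queue:
        state, g = queue.popleft()
        if state == goal:
            return g
        x, y = state
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not passable(n):
                continue
            block = closed.setdefault(abstract(n), set())
            if n in block:                # check confined to one block
                continue
            block.add(n)
            queue.append((n, g + 1))
    return None

passable = lambda p: 0 <= p[0] < 6 and 0 <= p[1] < 6
distance = bfs_distance((0, 0), (5, 5), passable)   # -> 10
```

In the external-memory setting each block resides on disk and only the blocks reachable from the current expansion need to be in RAM; in the parallel setting, threads expanding nodes in non-interfering blocks need no synchronization on the closed list.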

- Eric A. Hansen
- AAAI
- 2007

For decision-theoretic planning problems with an indefinite horizon, plan execution terminates after a finite number of steps with probability one, but the number of steps until termination (i.e., the horizon) is uncertain and unbounded. In the traditional approach to modeling such problems, called a stochastic shortest-path problem, plan execution… (More)
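The distinction can be illustrated numerically: with a fixed per-step termination probability, execution ends with probability one and the expected horizon is finite, yet no bound on the horizon holds. The per-step probability and seed below are hypothetical:

```python
import random

# Indefinite-horizon illustration: if each step terminates independently
# with probability p, the horizon is geometric with mean 1/p --
# termination is certain, but the horizon is unbounded.

def sample_horizon(p, rng):
    steps = 1
    while rng.random() >= p:   # continue with probability 1 - p
        steps += 1
    return steps

rng = random.Random(0)
samples = [sample_horizon(0.25, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)   # close to 1/p = 4
longest = max(samples)               # far above the mean: no fixed bound
```

This is the property that makes indefinite-horizon problems awkward for finite-horizon formulations and motivates the stochastic shortest-path framing the abstract discusses.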