- Leslie Pack Kaelbling, Michael L. Littman, Anthony R. Cassandra
- Artif. Intell.
- 1998

In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (mdps) and partially observable mdps (pomdps). We then outline a novel algorithm for solving pomdps offline and show how, in some cases, a…
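The POMDP machinery this abstract introduces rests on tracking a belief state, i.e. a probability distribution over hidden states updated by Bayes' rule after each action and observation. A minimal sketch of that update, using a toy two-state model whose matrices are invented for illustration (not taken from the paper):

```python
import numpy as np

# Toy two-state POMDP for one fixed action.
# T[s, s'] = P(next state s' | state s); O[s', o] = P(observation o | next state s').
# Both matrices are made up for this example.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def belief_update(b, T, O, obs):
    """Bayes filter: b'(s') is proportional to O[s', obs] * sum_s b(s) * T[s, s']."""
    b_pred = b @ T                  # predict: push belief through the transition model
    b_new = O[:, obs] * b_pred      # correct: weight by the observation likelihood
    return b_new / b_new.sum()      # normalize to a probability distribution

b0 = np.array([0.5, 0.5])           # start maximally uncertain
b1 = belief_update(b0, T, O, obs=0)
```

Acting on the belief state rather than the raw observation is what lets a POMDP agent behave sensibly under noisy sensing; the solved policy maps beliefs to actions.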

Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for finding optimal behavior do not appear to scale well and have been unable…

In this paper, we describe the partially observable Markov decision process (pomdp) approach to finding optimal or near-optimal control strategies for partially observable stochastic environments, given a complete model of the environment. The pomdp approach was originally developed in the operations research community and provides a formal basis for planning…

Discrete Bayesian models have been used to model uncertainty for mobile-robot navigation, but the question of how actions should be chosen remains largely unexplored. This paper presents the optimal solution to the problem, formulated as a partially observable Markov decision process. Since solving for the optimal control policy is intractable, in general,…

Most exact algorithms for general partially observable Markov decision processes (POMDPs) use a form of dynamic programming in which a piecewise-linear and convex representation of one value function is transformed into another. We examine variations of the "incremental pruning" method for solving this problem and compare them to earlier algorithms…
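The piecewise-linear convex value functions mentioned above are represented as sets of α-vectors, with the value of a belief being the maximum dot product over the set; pruning discards vectors that never achieve that maximum. Full pruning requires linear programs, but a cheap first pass removes pointwise-dominated vectors. A sketch of that pointwise test, on invented vectors:

```python
import numpy as np

def prune_dominated(alpha_vectors):
    """Keep only alpha-vectors not pointwise dominated by some other vector.
    (Exact pruning also needs LP-based tests; this is only the cheap filter.)"""
    kept = []
    for i, a in enumerate(alpha_vectors):
        dominated = any(
            j != i and np.all(b >= a) and np.any(b > a)
            for j, b in enumerate(alpha_vectors)
        )
        if not dominated:
            kept.append(a)
    return kept

# Third vector is pointwise dominated by the first, so it can never be the max.
vectors = [np.array([1.0, 0.3]), np.array([0.2, 1.0]), np.array([0.6, 0.2])]
pruned = prune_dominated(vectors)
```

Keeping the α-vector set small after each dynamic-programming backup is exactly what methods like incremental pruning are organized around, since the set can otherwise grow exponentially per iteration.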

Solving partially observable Markov decision processes (POMDPs) is highly intractable in general, at least in part because the optimal policy may be infinitely large. In this paper, we explore the problem of finding the optimal policy from a restricted set of policies, represented as finite state automata of a given size. This problem is also intractable,…
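A finite-state-automaton policy of the kind this abstract restricts to can be written down as nodes labeled with actions plus observation-indexed transitions between nodes; executing it needs no belief tracking at all. A toy sketch (the controller structure and the tiger-problem-style action and observation names are invented for illustration):

```python
# Toy finite-state controller: each node carries an action;
# the observation received after acting selects the next node.
actions = {0: "listen", 1: "open-left"}
transitions = {
    (0, "growl-right"): 0,   # keep listening
    (0, "growl-left"): 1,    # confident enough to act
    (1, "growl-right"): 0,   # reset after acting
    (1, "growl-left"): 0,
}

def run(start_node, observations):
    """Execute the controller: emit each node's action, then follow the edge."""
    node, taken = start_node, []
    for obs in observations:
        taken.append(actions[node])
        node = transitions[(node, obs)]
    return taken
```

Fixing the automaton size bounds the policy's memory, which is what makes searching the restricted policy space feasible even though the unrestricted optimal policy may be infinite.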


- Eugene Charniak, Glenn Carroll, +5 authors John McCann
- Artif. Intell.
- 1996

We consider what tagging models are most appropriate as front ends for probabilistic context-free-grammar parsers. In particular, we ask if using a tagger that returns more than one tag, a "multiple tagger," improves parsing performance. Our conclusion is somewhat surprising: single-tag Markov-model taggers are quite adequate for the task. First of all,…
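The single-tag Markov-model taggers the abstract finds adequate commit to one tag per word via Viterbi decoding over transition and emission probabilities. A minimal sketch with a two-tag toy model (all probabilities and the tag inventory are invented, not from the paper):

```python
# Toy bigram (Markov-model) tagger: Viterbi over invented probabilities.
tags = ["DT", "NN"]
trans = {("<s>", "DT"): 0.7, ("<s>", "NN"): 0.3,
         ("DT", "DT"): 0.1, ("DT", "NN"): 0.9,
         ("NN", "DT"): 0.4, ("NN", "NN"): 0.6}
emit = {("DT", "the"): 0.9, ("DT", "dog"): 0.1,
        ("NN", "the"): 0.05, ("NN", "dog"): 0.95}

def viterbi(words):
    """Return the single best tag sequence under the bigram model."""
    # best[t] = (score, path) for the best sequence ending in tag t
    best = {t: (trans[("<s>", t)] * emit[(t, words[0])], [t]) for t in tags}
    for w in words[1:]:
        best = {t: max(((s * trans[(p, t)] * emit[(t, w)], path + [t])
                        for p, (s, path) in best.items()),
                       key=lambda x: x[0])
                for t in tags}
    return max(best.values(), key=lambda x: x[0])[1]
```

A "multiple tagger" would instead hand the parser several candidate tags per word; the paper's finding is that the parser gains little from that extra ambiguity over the single Viterbi tag.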

An increasing number of researchers in many areas are becoming interested in the application of the partially observable Markov decision process (pomdp) model to problems with hidden state. This model can account for both state transition and observation uncertainty. The majority of recent research interest in the pomdp model has been in the artificial…