Partially observable Markov decision process
Known as: POMDP, Partially observable Markov decision problem
A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state; instead, it acts on a belief distribution over states that it updates from noisy observations. (Source: Wikipedia)
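To make the definition concrete, the sketch below implements the standard discrete belief update b'(s') ∝ P(o | s', a) Σ_s P(s' | s, a) b(s). The two-state model, the arrays T and O, and the example numbers are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayesian belief update for a discrete POMDP.

    b : (|S|,)            current belief over hidden states
    a : int               action just taken
    o : int               observation just received
    T : (|A|, |S|, |S|)   T[a, s, s'] = P(s' | s, a)
    O : (|A|, |S|, |O|)   O[a, s', o] = P(o | s', a)
    """
    predicted = b @ T[a]               # predict: sum_s b(s) P(s' | s, a)
    unnorm = predicted * O[a][:, o]    # correct: weight by P(o | s', a)
    return unnorm / unnorm.sum()       # normalize to a distribution

# Hypothetical two-state example: the state never changes, and a sensor
# reports the true state 85% of the time.
T = np.array([[[1.0, 0.0], [0.0, 1.0]]])       # one action, identity dynamics
O = np.array([[[0.85, 0.15], [0.15, 0.85]]])   # O[0, s', o]
b = np.array([0.5, 0.5])
b = belief_update(b, a=0, o=0, T=T, O=O)
print(b)  # -> [0.85 0.15]: belief shifts toward state 0
```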
Related topics (18 relations)
Artificial intelligence, Automated planning and scheduling, Bellman equation, Computational complexity theory, …

Broader (2): Dynamic programming, Stochastic control
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
2018 · Session search modeling by partially observable Markov decision process
G. Yang, Xuchu Dong, Jiyun Luo, Sicong Zhang · Information Retrieval Journal, 2018 · Corpus ID: 23610696
Session search, the task of document retrieval for a series of queries in a session, has been receiving increasing attention from…
2014 · UAV Guidance Algorithms via Partially Observable Markov Decision Processes
Shankarachary Ragi, E. Chong · 2014 · Corpus ID: 64396684
The goal here is to design a path-planning algorithm to guide unmanned aerial vehicles (UAVs) for tracking multiple ground…
2009 · Voice activity detection using partially observable Markov decision process
Chi-youn Park, Namhoon Kim, Jeongmi Cho · Interspeech, 2009 · Corpus ID: 39609277
Partially observable Markov decision process (POMDP) has been generally used to model agent decision processes such as dialogue…
2009 · Misplaced item search in a warehouse using an RFID-based Partially Observable Markov Decision Process (POMDP) model
S. Hariharan, S. Bukkapatnam · IEEE International Conference on Automation…, 2009 · Corpus ID: 16720910
Inventory misplacement and inaccuracies contribute significantly to the operational expense of the overall supply chain. Radio…
2008 · Automated Upper Extremity Rehabilitation for Stroke Patients Using a Partially Observable Markov Decision Process
Patricia Kan, J. Hoey, Alex Mihailidis · AAAI Fall Symposium: AI in Eldercare: New…, 2008 · Corpus ID: 1739451
This paper presents a real-time system that guides stroke patients during upper extremity rehabilitation. The system…
2007 · Mixed Reinforcement Learning for Partially Observable Markov Decision Process
L. Dung, T. Komeda, M. Takagi · International Symposium on Computational…, 2007 · Corpus ID: 18515266
Reinforcement learning has been widely used to solve problems with little feedback from the environment. Q learning can solve full…
2004 · Reinforcement learning algorithm for partially observable Markov decision processes
Xu Xin · 2004 · Corpus ID: 123943208
In partially observable Markov decision processes (POMDP), due to perceptual aliasing, the memoryless policies obtained by Sarsa…
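This abstract turns on perceptual aliasing: distinct states produce identical observations, so a memoryless (observation-to-action) policy such as one learned by Sarsa must act identically in states that may demand different actions. A minimal illustration follows; the three-state toy problem, its observations, and its actions are assumptions for illustration only, not the paper's setup.

```python
# Hypothetical toy problem: states s0 and s2 emit the same observation
# "corridor" but have different optimal actions, so no memoryless
# observation -> action policy can act optimally in both.
observation = {"s0": "corridor", "s1": "goal", "s2": "corridor"}
optimal_action = {"s0": "go_right", "s1": "stay", "s2": "go_left"}

for obs in sorted(set(observation.values())):
    states = sorted(s for s, o in observation.items() if o == obs)
    actions = {optimal_action[s] for s in states}
    if len(actions) > 1:
        print(f"{obs!r} aliases {states}: conflicting optimal actions {actions}")
```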
2001 · Learning hierarchical partially observable Markov decision process models for robot navigation
Georgios Theocharous, Khashayar Rohanimanesh, S. Mahadevan · Proceedings ICRA. IEEE International Conference…, 2001 · Corpus ID: 14597950
We propose and investigate a general framework for hierarchical modeling of partially observable environments, such as office…
Highly Cited · 1994 · Optimal Policies for Partially Observable Markov Decision Processes
A. Cassandra · 1994 · Corpus ID: 60888474
The main objective of this report is to provide implementation details for the more popular exact algorithms for solving finite…
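Exact solvers of the kind this report describes represent the value function as a finite set of alpha-vectors and apply dynamic-programming backups. Below is a minimal, unpruned enumeration backup (Monahan-style) as a rough sketch of what such algorithms compute; the model arrays and toy numbers are hypothetical, and real exact algorithms (e.g., witness, incremental pruning) add pruning steps this sketch omits.

```python
import itertools
import numpy as np

def exact_backup(Gamma, R, T, O, gamma=0.95):
    """One exact value-iteration backup over a set of alpha-vectors.

    Gamma : list of length-|S| arrays; V(b) = max over alpha of b @ alpha
    R     : (|A|, |S|)       immediate reward R[a, s]
    T     : (|A|, |S|, |S|)  T[a, s, s'] = P(s' | s, a)
    O     : (|A|, |S|, |O|)  O[a, s', o] = P(o | s', a)
    """
    nA, nO = R.shape[0], O.shape[2]
    new_Gamma = []
    for a in range(nA):
        # Project each alpha-vector through each observation:
        # proj[o][i](s) = gamma * sum_s' T[a,s,s'] O[a,s',o] alpha_i(s')
        proj = [[gamma * T[a] @ (O[a][:, o] * alpha) for alpha in Gamma]
                for o in range(nO)]
        # Cross-sum: choose one projected vector per observation.
        for choice in itertools.product(*proj):
            new_Gamma.append(R[a] + sum(choice))
    return new_Gamma  # exhaustive; a real solver prunes dominated vectors

# Hypothetical 2-state, 2-action, 2-observation model for illustration.
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
T = np.stack([np.eye(2), np.eye(2)])                  # actions don't move state
O = np.stack([np.array([[0.8, 0.2], [0.2, 0.8]])]*2)  # noisy state sensor
Gamma = [np.zeros(2)]
for _ in range(2):
    Gamma = exact_backup(Gamma, R, T, O)
print(len(Gamma), "alpha-vectors after two unpruned backups")  # 2, then 8
```

Without pruning, the vector set grows as |A|·|Gamma|^|O| per backup, which is why the exact algorithms the report surveys spend most of their effort discarding dominated vectors.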
Review · 1988 · On the adaptive control of a partially observable Markov decision process
E. Fernández-Gaucherand, A. Arapostathis, S. Marcus · Proceedings of the 27th IEEE Conference on…, 1988 · Corpus ID: 123075493
The study represents the initial stages of a program to address the adaptive control of partially observable Markov decision…