Markov decision process
Known as: Value iteration, Policy iteration, Markov decision problems
Markov decision processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. (Source: Wikipedia)
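To make the "Value iteration" alias above concrete, here is a minimal sketch of value iteration on a small finite MDP. The toy states, transition probabilities, rewards, and discount factor are invented for illustration and are not taken from any of the papers listed below.

# Value iteration on a toy finite MDP (illustrative example only; the model is hypothetical).
import itertools

# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(0.9, 0, 2.0), (0.1, 1, 0.0)]},
}
gamma = 0.9   # discount factor
theta = 1e-8  # convergence threshold

V = {s: 0.0 for s in transitions}  # value function, initialized to zero
for _ in itertools.count():
    delta = 0.0
    for s, actions in transitions.items():
        # Bellman optimality backup: V(s) = max_a sum_{s'} P(s'|s,a) [r + gamma V(s')]
        best = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < theta:
        break

# Greedy policy extracted from the converged value function
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(V, policy)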
Related topics (49 relations; partial list)
Apprenticeship learning
Artificial neural network
Automatic control
Backward induction
Broader (2)
Dynamic programming
Stochastic control
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited · 2011
Multigrid Methods for Hamilton-Jacobi-Bellman and Hamilton-Jacobi Equations
D. Han
Corpus ID: 2233819
We propose multigrid methods for solving Hamilton-Jacobi-Bellman (HJB) and Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations. The…
2011
Conjugate Markov Decision Processes
P. Thomas, A. Barto
International Conference on Machine Learning · Corpus ID: 6097348
Many open problems involve the search for a mapping that is used by an algorithm solving an MDP. Useful mappings are often from…
2011
Policy controlled self-configuration in unattended wireless sensor networks
S. Misra, Ankur Jain
Journal of Network and Computer Applications · Corpus ID: 37183236
2010
A model-free robust policy iteration algorithm for optimal control of nonlinear systems
S. Bhasin, Marcus Johnson, W. Dixon
IEEE Conference on Decision and Control · Corpus ID: 2203236
An online model-free solution is developed for the infinite-horizon optimal control problem for continuous-time nonlinear systems…
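For contrast with the continuous-time, model-free setting of the entry above, the following is a minimal textbook policy iteration sketch for a finite, fully known MDP; it is not the algorithm from that paper, and it reuses the same hypothetical toy model as the value iteration example earlier on this page.

# Textbook policy iteration on a toy finite MDP (illustrative only; not the
# model-free continuous-time algorithm of the paper above).

# transitions[s][a] = list of (probability, next_state, reward); values are made up
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(0.9, 0, 2.0), (0.1, 1, 0.0)]},
}
gamma = 0.9

def q_value(s, a, V):
    # Expected one-step return of taking action a in state s under values V
    return sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a])

policy = {s: next(iter(actions)) for s, actions in transitions.items()}  # arbitrary start
V = {s: 0.0 for s in transitions}

while True:
    # Policy evaluation: iterate the Bellman expectation backup to convergence
    while True:
        delta = 0.0
        for s in transitions:
            v_new = q_value(s, policy[s], V)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < 1e-8:
            break
    # Policy improvement: act greedily with respect to the evaluated values
    stable = True
    for s, actions in transitions.items():
        best_a = max(actions, key=lambda a: q_value(s, a, V))
        if best_a != policy[s]:
            policy[s] = best_a
            stable = False
    if stable:
        break

print(policy, V)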
2006
Existence of Optimal Policies for Semi-Markov Decision Processes Using Duality for Infinite Linear Programming
D. Klabjan, Daniel Adelman
SIAM Journal on Control and Optimization · Corpus ID: 11806830
Semi-Markov decision processes on Borel spaces with deterministic kernels have many practical applications, particularly in…
Review · 1999
An Overview of Planning Under Uncertainty
J. Blythe
Artificial Intelligence Today · Corpus ID: 16560403
The recent advances in computer speed and algorithms for probabilistic inference have led to a resurgence of work on planning…
1986
Markov decision drift processes
F. D. D. Schouten
Corpus ID: 124922742
In Markov decision theory we distinguish (a) discrete-time Markov decision processes (b) semi-Markov decision…
Review · 1978
Contracting Markov decision processes
V. Nunen
Corpus ID: 126118263
Review · 1978
Contracting Markov Decision Processes
A. Unwin
Corpus ID: 62472393
1977
Markov decision processes with unbounded rewards
J. Wessels, V. Nunen
Corpus ID: 118917333
Markov decision processes which allow for an unbounded reward structure are considered. Conditions are given which allow…