
Markov decision process

Known as: Value iteration, Policy iteration, Markov decision problems 
Markov decision processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and… 
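Value iteration, named above as a synonym for this topic, computes the optimal state values of an MDP by repeatedly applying the Bellman optimality update until the values stop changing. A minimal sketch on a hypothetical two-state MDP (the states, actions, transition probabilities, and rewards below are illustrative assumptions, not taken from any paper on this page):

```python
# Toy MDP: P[s][a] is a list of (probability, next_state, reward) triples.
# All numbers here are illustrative.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(P, gamma, tol=1e-8):
    """Sweep the Bellman optimality update in place until convergence."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(P, gamma)
# Read off a greedy policy with respect to the converged values.
greedy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in P[s][a]))
    for s in P
}
```

For this toy model the greedy policy moves to state 1 and stays there, since staying in state 1 earns reward 2 forever (value 2 / (1 - 0.9) = 20).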
Wikipedia

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2011
We propose multigrid methods for solving Hamilton-Jacobi-Bellman (HJB) and HamiltonJacobi-Bellman-Isaacs (HJBI) equations. The… 
2011
Many open problems involve the search for a mapping that is used by an algorithm solving an MDP. Useful mappings are often from… 
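The "mapping" in such problems is typically a policy, i.e. a function from states to actions. Policy iteration, the other algorithm named at the top of this page, searches for that mapping directly by alternating policy evaluation with greedy improvement. A minimal sketch on a hypothetical deterministic two-state MDP (all states, actions, and rewards are illustrative assumptions):

```python
# Toy deterministic MDP: T[s][a] = (next_state, reward).
# All numbers here are illustrative.
T = {
    0: {"a": (0, 0.0), "b": (1, 5.0)},
    1: {"a": (1, 1.0), "b": (0, 0.0)},
}
gamma = 0.5

def evaluate(policy, tol=1e-10):
    """Iterative policy evaluation: fixed point of V = r_pi + gamma * V(next)."""
    V = {s: 0.0 for s in T}
    while True:
        delta = 0.0
        for s in T:
            s2, r = T[s][policy[s]]
            v = r + gamma * V[s2]
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def policy_iteration():
    policy = {s: "a" for s in T}  # arbitrary initial mapping
    while True:
        V = evaluate(policy)
        improved = {
            s: max(T[s], key=lambda a: T[s][a][1] + gamma * V[T[s][a][0]])
            for s in T
        }
        if improved == policy:          # no change: policy is optimal
            return policy, V
        policy = improved

policy, V = policy_iteration()
```

Here the optimal mapping cycles between the two states via action "b", collecting reward 5 every other step.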
2010
An online model-free solution is developed for the infinite-horizon optimal control problem for continuous-time nonlinear systems… 
2006
Semi-Markov decision processes on Borel spaces with deterministic kernels have many practical applications, particularly in… 
Review
1999
  • J. Blythe
  • Corpus ID: 16560403
The recent advances in computer speed and algorithms for probabilistic inference have led to a resurgence of work on planning… 
1986
In Markov decision theory we distinguish (a) discrete-time Markov decision processes (b) semi-Markov decision… 
Review
1978
1977
Markov decision processes which allow for an unbounded reward structure are considered. Conditions are given which allow…