
Markov decision process

Known as: Value iteration, Policy iteration, Markov decision problems 
Markov decision processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker.
Source: Wikipedia
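The "Known as" aliases above, value iteration and policy iteration, are the classical dynamic-programming algorithms for solving MDPs. As a minimal sketch of value iteration on a hypothetical two-state MDP (the transition probabilities and rewards below are invented purely for illustration, not drawn from any paper on this page):

```python
# Toy MDP: P[s][a] is a list of (probability, next_state, reward) transitions.
# In state 0, action 1 tries to reach state 1; in state 1, action 1 stays
# there and earns reward 2 per step.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(1000):
    # Bellman optimality backup: max over actions of expected one-step
    # reward plus discounted value of the successor state.
    V_new = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        for s in P
    }
    if max(abs(V_new[s] - V[s]) for s in P) < 1e-8:  # sup-norm convergence
        V = V_new
        break
    V = V_new

# Greedy policy with respect to the converged value function.
policy = {
    s: max(P[s], key=lambda a, s=s: sum(p * (r + gamma * V[s2])
                                        for p, s2, r in P[s][a]))
    for s in P
}
```

For this toy instance the iteration converges to V(1) = 2/(1 - 0.9) = 20 and the greedy policy picks action 1 in both states; with a discount factor gamma < 1 the backup is a contraction, which is why the loop terminates.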

Papers overview

Semantic Scholar uses AI to extract papers important to this topic.
Highly Cited
2011
We propose multigrid methods for solving Hamilton-Jacobi-Bellman (HJB) and Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations. The… 
Highly Cited
2007
Distributed wireless mesh network technology is ready for public deployment in the near future. However, without an incentive… 
2006
Semi-Markov decision processes on Borel spaces with deterministic kernels have many practical applications, particularly in… 
Highly Cited
1998
Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more… 
1992
The two most commonly considered reward criteria for Markov decision processes are the discounted reward and the long-term… 
Review
1978
• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important… 
1977
Markov decision processes which allow for an unbounded reward structure are considered. Conditions are given which allow…