Controlled Markov processes and viscosity solutions

@book{Fleming1992ControlledMP,
  title={Controlled Markov processes and viscosity solutions},
  author={Wendell H. Fleming and Halil Mete Soner},
  year={1992}
}
This book is intended as an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming. The text provides an introduction to dynamic programming for deterministic optimal control problems, as well as to the corresponding theory of viscosity solutions. Also covered are controlled Markov diffusions and viscosity solutions of Hamilton-Jacobi-Bellman equations.
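In generic notation (ours, not necessarily the book's), the dynamic programming method characterizes the value function of a controlled diffusion dX_t = b(X_t, u_t) dt + σ(X_t, u_t) dW_t, with running cost f and terminal cost g on a horizon [0, T], as a solution of the Hamilton-Jacobi-Bellman equation

$$ \partial_t V(t,x) + \inf_{u \in U} \Big\{ b(x,u)\cdot \nabla_x V(t,x) + \tfrac12 \operatorname{tr}\!\big(\sigma(x,u)\sigma(x,u)^{\top} D_x^2 V(t,x)\big) + f(x,u) \Big\} = 0, \qquad V(T,x) = g(x). $$

Since V is in general not smooth enough for this to hold classically, the equation is interpreted in the viscosity sense.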
Citations

Controlled Markov processes, viscosity solutions and applications to mathematical finance
The purpose of this section is to give a concise, nontechnical introduction to stochastic differential equations and to controlled diffusion processes. In the first subsection, we provide a brief…
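As a concrete sketch of a controlled diffusion, the following Euler-Maruyama simulation integrates dX_t = u(X_t) dt + σ dW_t under a simple feedback law; the drift form u(x) = -x, the parameters, and the function name are illustrative assumptions, not taken from the cited text.

    import numpy as np

    def simulate_controlled_sde(x0=1.0, T=1.0, n_steps=1000, sigma=0.2, seed=0):
        """Euler-Maruyama path of dX = u(X) dt + sigma dW with a
        proportional feedback control u(x) = -x (illustrative choice)."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            u = -x[k]                          # feedback control (assumed form)
            dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
            x[k + 1] = x[k] + u * dt + sigma * dw
        return x

    path = simulate_controlled_sde()
    print(path[-1])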
Solution of stochastic optimal control problems and financial applications
In this paper, stochastic optimal control problems that occur frequently in economics and finance are investigated. First, using Bellman's dynamic programming method, the stochastic optimal…
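The starting point of Bellman's method referred to here is the dynamic programming principle; in generic notation, for any small intermediate horizon h > 0,

$$ V(t,x) = \inf_{u} \, \mathbb{E}\Big[ \int_t^{t+h} f(X_s, u_s)\, ds + V(t+h, X_{t+h}) \,\Big|\, X_t = x \Big], $$

and letting h → 0 formally yields the Hamilton-Jacobi-Bellman equation sketched above.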
Tutorial for Viscosity Solutions in Optimal Control of Diffusions
This tutorial is an introduction to the theory of viscosity solutions of Hamilton-Jacobi-Bellman equations/inequalities in the realm of stochastic control problems. It is an easy-to-use reference for…
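For orientation, the standard definition such tutorials build on (for a degenerate elliptic equation F(x, u, Du, D²u) = 0, with the usual sign convention): an upper semicontinuous u is a viscosity subsolution if, for every smooth test function φ,

$$ u - \varphi \text{ has a local maximum at } x_0 \;\Longrightarrow\; F\big(x_0, u(x_0), D\varphi(x_0), D^2\varphi(x_0)\big) \le 0; $$

lower semicontinuous supersolutions are defined symmetrically with local minima and "≥", and a viscosity solution is both. Derivatives are thus transferred to smooth test functions, which is what allows nonsmooth value functions to solve HJB equations.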
Optimal Control of Nonlinear Systems with Controlled Transitions
This paper studies the optimal stochastic control problem with piecewise-deterministic dynamics. The controls enter through the system dynamics as well as through the transitions of the underlying Markov…
Viscosity Solutions for a System of Integro-PDEs and Connections to Optimal Switching and Control of Jump-Diffusion Processes
We develop a viscosity solution theory for a system of nonlinear degenerate parabolic integro-partial differential equations (IPDEs) related to stochastic optimal switching and control problems or…
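Such systems typically couple HJB-type integro-differential operators through switching obstacles; a hedged sketch of the general shape (our notation, not necessarily the authors' exact formulation):

$$ \min\Big\{ -\partial_t u_i - \mathcal{L}u_i - \mathcal{I}u_i - f_i, \;\, u_i - \max_{j\neq i}\big(u_j - c_{ij}\big) \Big\} = 0, \qquad i = 1,\dots,m, $$

where \mathcal{L} is a degenerate second-order operator, c_{ij} > 0 are switching costs, and \mathcal{I} is the nonlocal operator generated by the jumps, e.g. \mathcal{I}u(x) = \int \big( u(x+z) - u(x) - \nabla u(x)\cdot z\, \mathbf{1}_{\{|z|\le 1\}} \big)\, \nu(dz) for a Lévy measure ν.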
Optimal control of piecewise deterministic nonlinear systems with controlled transitions: viscosity solutions, their existence and uniqueness
  • M. Xiao, T. Başar
  • Mathematics
  • Proceedings of the 38th IEEE Conference on Decision and Control (Cat. No.99CH36304)
  • 1999
The paper studies viscosity solutions of two sets of Hamilton-Jacobi-Bellman (HJB) equations (one for the finite horizon and the other for the infinite horizon) which arise in the optimal control of…
Optimal control of ultradiffusion processes with application to mathematical finance
  • M. Marcozzi
  • Mathematics, Computer Science
  • Int. J. Comput. Math.
  • 2015
A method-of-lines finite element method is utilized to approximate the value function of a European-style call option in a market subject to asset liquidity risk (including limit orders) and brokerage fees.
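To illustrate the method of lines named in the summary, here is a sketch on the plain Black-Scholes equation rather than the paper's liquidity-risk model; the grid, parameters, and payoff are illustrative assumptions. Space is discretized by central differences and the resulting ODE system is integrated in time-to-maturity:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Black-Scholes PDE for a European call, method of lines:
    # central differences in the asset variable S, stiff ODE solver
    # in time-to-maturity tau.
    r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
    S = np.linspace(0.0, 300.0, 301)   # assumed spatial grid
    dS = S[1] - S[0]

    def rhs(tau, v):
        dv = np.zeros_like(v)
        s = S[1:-1]
        v_s = (v[2:] - v[:-2]) / (2.0 * dS)              # first derivative
        v_ss = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dS**2  # second derivative
        dv[1:-1] = 0.5 * sigma**2 * s**2 * v_ss + r * s * v_s - r * v[1:-1]
        dv[0] = 0.0                        # call is worthless at S = 0
        dv[-1] = r * K * np.exp(-r * tau)  # d/dtau of (S_max - K e^{-r tau})
        return dv

    payoff = np.maximum(S - K, 0.0)        # value at tau = 0
    sol = solve_ivp(rhs, (0.0, T), payoff, method="BDF")
    print(sol.y[np.searchsorted(S, K), -1])  # at-the-money value at tau = T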
Capacities, Measurable Selection and Dynamic Programming Part II: Application in Stochastic Control Problems
We aim to give an overview on how to derive the dynamic programming principle for a general stochastic control/stopping problem, using measurable selection techniques. By considering their martingaleExpand
Optimal Control Problems for Stochastic Reaction-Diffusion Systems with Non-Lipschitz Coefficients
  • S. Cerrai
  • Mathematics, Computer Science
  • SIAM J. Control Optim.
  • 2001
Using the dynamic programming approach, a control problem for a class of stochastic reaction-diffusion systems with coefficients of polynomial growth is studied; a non-Lipschitz term is allowed in the cost functional, which makes it possible to treat the quadratic case.
Stochastic control via direct comparison
This paper introduces an alternative approach based on direct comparison of the performance of any two policies, by modeling the state process as a continuous-time and continuous-state Markov process and applying the same ideas as in the discrete-time and discrete-state case.
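The comparison rests on a performance difference formula; a hedged sketch in the discrete-state continuous-time setting (our notation, following the general sensitivity-based framework rather than the paper's exact statement): for two policies with generators Q, Q', cost rates f, f', stationary distributions π, π' and long-run average costs η, η',

$$ \eta' - \eta = \pi'\big[ (f' - f) + (Q' - Q)\, g \big], $$

where g is the potential (bias) of the first policy, solving the Poisson equation Qg + f = η\mathbf{1}. Because only π' and quantities of the first policy appear on the right-hand side, any two policies can be compared directly, without solving an HJB equation.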

References

Showing 1-10 of 116 references
Additive control of stochastic linear systems with finite horizon
We consider a dynamic system whose state is governed by a linear stochastic differential equation with time-dependent coefficients. The control acts additively on the state of the system. Our…
Exit probabilities and optimal stochastic control
This paper is concerned with Markov diffusion processes which obey stochastic differential equations depending on a small parameter ε. The parameter enters as a coefficient in the noise term of the…
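The bridge between exit probabilities and control in this line of work is a logarithmic transformation; a hedged sketch in generic notation: if q^ε denotes the exit probability for the diffusion with noise intensity ε, then

$$ I^{\varepsilon} = -\varepsilon \log q^{\varepsilon} $$

satisfies a nonlinear second-order equation of HJB type, and as ε → 0 it converges, under suitable hypotheses, to the value function of a deterministic optimal control problem, recovering large-deviation asymptotics for the exit event.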
Piecewise‐Deterministic Markov Processes: A General Class of Non‐Diffusion Stochastic Models
A general class of non-diffusion stochastic models is introduced with a view to providing a framework for studying optimization problems arising in queueing systems, inventory theory, resource…
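For reference, the extended generator of a piecewise-deterministic Markov process in this framework acts, roughly (suppressing boundary-jump terms and domain questions), as

$$ \mathcal{A}f(x) = \mathfrak{X}f(x) + \lambda(x) \int \big( f(y) - f(x) \big)\, Q(x, dy), $$

where \mathfrak{X} is the vector field driving the deterministic flow between jumps, λ the jump rate, and Q the post-jump distribution.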
Stochastic Control for Small Noise Intensities
This paper is concerned with the approximate solution of stochastic optimal control problems which arise by perturbing the system equations in the deterministic Pontryagin control model, through an…
Stochastic Control and Exit Probabilities of Jump Processes
A stochastic control problem is formulated for some problems related to Markov processes. This formulation is, in some sense, a generalization of the one used in [2], [3], [4], [8] for the diffusion case. We…
Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations, Part 2: viscosity solutions and uniqueness
We consider general optimal stochastic control problems and the associated Hamilton–Jacobi–Bellman equations. We develop a general notion of weak solutions – called viscosity solutions – of the…
Singular perturbations in manufacturing
An asymptotic analysis for a large class of stochastic optimization problems arising in manufacturing is presented. A typical example of the problems considered in this paper is a production planning…
Regularity of the value function for a two-dimensional singular stochastic control problem
It is desired to control a two-dimensional Brownian motion by adding a (possibly singularly) continuous process to it so as to minimize an expected infinite-horizon discounted running cost. The…
A class of singular stochastic control problems
We consider the problem of tracking a Brownian motion by a process of bounded variation, in such a way as to minimize total expected cost of both 'action' and 'deviation from a target state 0'. The…
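A hedged sketch of the type of problem meant here (our notation): with state X_t = x + W_t + ξ_t, where ξ is an adapted process of bounded variation, minimize

$$ \mathbb{E}\int_0^{\infty} e^{-\alpha t}\, h(X_t)\, dt \;+\; \mathbb{E}\int_0^{\infty} e^{-\alpha t}\, d|\xi|_t, $$

where h penalizes deviation from the target state 0 and |ξ|_t is the total variation of the control on [0, t]. Optimal policies in such problems are typically of reflection type, acting only on the boundary of a no-action region.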
Some characterizations of optimal trajectories in control theory
Several characterizations of optimal trajectories for the classical Mayer problem in optimal control are provided. For this purpose the regularity of directional derivatives of the value function is…