Dynamic Programming Conditions for Partially Observable Stochastic Systems

@article{Davis1973DynamicPC,
  title={Dynamic Programming Conditions for Partially Observable Stochastic Systems},
  author={M. H. A. Davis and Pravin Pratap Varaiya},
  journal={SIAM Journal on Control},
  year={1973},
  volume={11},
  pages={226-261}
}
In this paper necessary and sufficient conditions for optimality are derived for systems described by stochastic differential equations with control based on partial observations. The solution of the system is defined in a way which permits a very wide class of admissible controls, and then Hamilton–Jacobi criteria for optimality are derived from a version of Bellman’s “principle of optimality.” The method of solution is based on a result of Girsanov: Wiener measure is transformed for each…
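
As a hedged sketch of the Girsanov device alluded to above (the notation is ours, not the paper's): one starts from a reference measure P_0 under which the state X is a Brownian motion, and for each admissible control u defines a measure P_u through the exponential density

  \frac{dP_u}{dP_0} = \exp\!\Big( \int_0^T f(t, X, u_t)\,dX_t - \tfrac{1}{2} \int_0^T |f(t, X, u_t)|^2\,dt \Big),

under which X_t - \int_0^t f(s, X, u_s)\,ds is a Brownian motion, i.e. X is a weak solution of dX_t = f(t, X, u_t)\,dt + dB_t. Because the control enters only through the density and not through the path, a very wide class of merely measurable, non-anticipating controls becomes admissible.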

Stochastic Control with Complete Observations on a Finite Horizon

Optimal stochastic control problems are formulated for a stochastic control system with complete observations on a finite horizon. Dynamic programming yields necessary and sufficient conditions for…
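
For orientation, a standard complete-observation formulation reads as follows (our notation, under unspecified regularity assumptions): for dynamics dX_t = b(X_t, u_t)\,dt + \sigma(X_t)\,dW_t and cost \mathsf{E}[\int_0^T c(X_t, u_t)\,dt + g(X_T)], dynamic programming characterizes the value function V by the Hamilton–Jacobi–Bellman equation

  \partial_t V(t,x) + \min_u \Big[ b(x,u) \cdot \nabla V(t,x) + \tfrac{1}{2}\,\mathrm{tr}\big( \sigma\sigma^\top(x)\, \nabla^2 V(t,x) \big) + c(x,u) \Big] = 0, \qquad V(T,x) = g(x),

and a feedback control attaining the pointwise minimum is optimal by a verification argument.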

A Minimum Principle for Decentralized Stochastic Control Problems

The notion of the “state” of a dynamical system has turned out, in retrospect, to be the single most important concept in system theory, and the representation of systems in state space form is now…

A necessary condition for optimality in a problem of stochastic control with discretized observations

The methodology consists in applying the variational theory of L. W. Neustadt to derive a necessary condition for optimality analogous to the Pontryagin maximum principle in deterministic optimal control.
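
For comparison, the deterministic principle referred to can be sketched as follows (our notation; written as a minimum principle for cost minimization): for \dot{x}_t = f(x_t, u_t) with running cost c, introduce the Hamiltonian and adjoint equation

  H(x, p, u) = p \cdot f(x, u) + c(x, u), \qquad \dot{p}_t = -\partial_x H(x^*_t, p_t, u^*_t),

and an optimal control minimizes the Hamiltonian pointwise, u^*_t \in \arg\min_u H(x^*_t, p_t, u). The paper's condition is an analogue of this for stochastic systems whose observations arrive at discrete times.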

Optimal Stabilization of Linear Stochastic System with Statistically Uncertain Piecewise Constant Drift

The paper presents an optimal control problem for the partially observable stochastic differential system driven by an external Markov jump process. The available controlled observations are indirect…
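
A minimal sketch of a model of this type (our notation and our assumptions, not necessarily the paper's): a linear diffusion whose drift is modulated by an unobserved, piecewise-constant Markov jump process \theta, seen only through a noisy output Y,

  dX_t = (A X_t + B u_t + \theta_t)\,dt + \sigma\,dW_t, \qquad dY_t = C X_t\,dt + \nu\,dV_t,

with controls restricted to be adapted to the filtration generated by the observations Y.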

Stochastic continuous control of partially observed systems via impulse control problems

The stochastic control problem of partially observed systems is approximated by a sequence of partially observed impulse control problems with non-zero but vanishing impulse costs. For the latter…
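
Schematically, and only as a hedged illustration (our notation): each approximating impulse problem with impulse cost k > 0 has a value V^k solving a quasi-variational inequality of the form

  \min\big( -\partial_t V^k - \mathcal{L} V^k - c,\; V^k - \mathcal{M}_k V^k \big) = 0, \qquad \mathcal{M}_k V^k(t, x) = \inf_{\xi} \big[ V^k(t, x + \xi) + k \big],

and as the impulse cost vanishes, k \downarrow 0, the values V^k converge to the value of the original continuous-control problem.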

Application of Conditional-Optimal Filter for Synthesis of Suboptimal Control in the Problem of Optimizing the Output of a Nonlinear Differential Stochastic System

  • A. Bosov, Autom. Remote Control, 2020
This work proposes an alternative to the traditional practical approach to synthesizing suboptimal control under incomplete information: the state in the complete-information solution is formally replaced by its estimate from a conditionally optimal filter.
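
In symbols, the substitution recipe reads (our notation): if u^*(t, x) is the optimal feedback under complete information, the suboptimal control is

  u_t = u^*(t, \hat{X}_t), \qquad \hat{X}_t = \mathsf{E}[\,X_t \mid Y_s,\ s \le t\,],

with the exact conditional expectation replaced in practice by the computable estimate of a conditionally optimal filter.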

Optimal control of semi-Markov processes with a backward stochastic differential equations approach

It is proved that the value function and the optimal control law can be represented by means of the solution of a class of BSDEs driven by a semi-Markov process or, equivalently, by the associated random measure.
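
A generic instance of such a representation, hedged and in our notation: the value equals Y_0 for a BSDE of the form

  Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T \!\!\int Z_s(y)\, q(ds, dy),

where q is the compensated random measure associated with the semi-Markov process X, and an optimal control is obtained by minimizing a Hamiltonian inside the driver f.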

Martingale methods in stochastic control

Abstract: The martingale treatment of stochastic control problems is based on the idea that the correct formulation of Bellman's principle of optimality for stochastic minimization problems is in…
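
Concretely, the martingale formulation can be stated as follows (our notation): with the value process V_t = \operatorname{ess\,inf}_u \mathsf{E}[\,\int_t^T c(s, X, u_s)\,ds + g(X_T) \mid \mathcal{F}_t\,], the cost-plus-value process

  M^u_t = \int_0^t c(s, X, u_s)\,ds + V_t

is a submartingale for every admissible control u, and a control is optimal exactly when the corresponding M^u is a martingale.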

Sequential Stochastic Control (Single or Multi-Agent) Problems Nearly Admit Change of Measures with Independent Measurements

Change of measures has been an effective method in stochastic control and analysis; in continuous-time control this follows Girsanov’s theorem applied to both fully observed and partially observed
...
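
The device alluded to, sketched in our notation: under a reference measure Q the observation process Y is a Brownian motion independent of the state (an independent measurement sequence in discrete time), and the physical measure P is recovered via the likelihood ratio

  \frac{dP}{dQ}\Big|_{\mathcal{F}_T} = \exp\!\Big( \int_0^T h(X_t)\,dY_t - \tfrac{1}{2} \int_0^T |h(X_t)|^2\,dt \Big),

which decouples the measurements from the controlled dynamics.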
