Controlled exploration of state space in off-line ADP and its application to stochastic shortest path problems


This paper addresses the problem of finding a control policy that drives a generic discrete event stochastic system from an initial state to a set of goal states with a specified probability. The control policy is iteratively constructed via an approximate dynamic programming (ADP) technique over a small subset of the state space that is evolved via Monte…
DOI: 10.1016/j.compchemeng.2009.06.012
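
The idea of restricting dynamic programming to a simulation-explored subset of states can be illustrated with a minimal sketch. The example below is hypothetical (a 1-D stochastic chain, not the paper's model or algorithm): Monte Carlo rollouts first grow a small explored subset, then value iteration for expected cost-to-goal is run only over that subset, with unexplored states handled by an optimistic default.

```python
import random

def sketch_adp_ssp(n_states=10, goal=9, p_fwd=0.8, sweeps=50):
    """Hypothetical sketch: value iteration on a Monte Carlo-explored
    subset of a 1-D stochastic shortest path chain (not the paper's method)."""
    random.seed(0)
    # Step 1: grow an explored subset via Monte Carlo rollouts from state 0.
    explored = {goal}
    for _ in range(20):
        s = 0
        for _ in range(200):
            explored.add(s)
            if s == goal:
                break
            # Move toward the goal with probability p_fwd, else step back.
            s = min(s + 1, goal) if random.random() < p_fwd else max(s - 1, 0)
    # Step 2: expected-cost-to-goal value iteration over the subset only;
    # states outside the subset get an optimistic cost of 0.
    V = {s: 0.0 for s in explored}
    for _ in range(sweeps):
        for s in explored:
            if s == goal:
                continue
            up, dn = min(s + 1, goal), max(s - 1, 0)
            V[s] = 1.0 + p_fwd * V.get(up, 0.0) + (1 - p_fwd) * V.get(dn, 0.0)
    return V
```

Under these assumptions the cost-to-go decreases monotonically toward the goal state; in the paper's setting the same subset-restricted iteration is applied to a generic discrete event system rather than a fixed chain.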
