Xianping Guo

In this paper, we give conditions for the existence of average optimal policies for continuous-time controlled Markov chains with a denumerable state space and Borel action sets. The transition rates are allowed to be unbounded, and the reward/cost rates may have neither upper nor lower bounds. In the spirit of the "drift and monotonicity" conditions for …
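For reference, the optimality criterion in this setting is the long-run expected average reward; a standard formulation is sketched below for context (the paper's precise definitions and conditions may differ):

  J(i, \pi) := \liminf_{T \to \infty} \frac{1}{T}\, \mathbb{E}^{\pi}_{i}\!\left[ \int_{0}^{T} r\bigl(x(t), a(t)\bigr)\, dt \right],

where x(t) is the state process, a(t) the action process, and r the reward rate. A policy \pi^{*} is average optimal if J(i, \pi^{*}) = \sup_{\pi} J(i, \pi) for every initial state i; drift-type conditions are what keep this criterion well defined when the transition and reward rates are unbounded.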
In a partially observable Markov decision process (POMDP), if the reward can be observed at each step, then the observed reward history contains information on the unknown state. This information, in addition to the information contained in the observation history, can be used to update the state probability distribution. The policy thus obtained is called …
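To illustrate the idea, here is a minimal sketch under simplifying assumptions, not the authors' algorithm: if the one-step reward depends only on the hidden state and the chosen action and is observed exactly, the observed reward acts as an extra likelihood term in the usual Bayesian belief update. All names below (belief_update, reward_table, and so on) are hypothetical.

import numpy as np

def belief_update(belief, action, observation, reward,
                  T, O, reward_table, tol=1e-9):
    # belief: prior distribution over hidden states, shape (S,)
    # T[a, s, s']: transition model, O[a, s', o]: observation model,
    # reward_table[a, s]: deterministic reward for action a in state s.

    # Likelihood of the observed reward under each predecessor state
    # (1 if the reward model is consistent with the observation, else 0).
    reward_lik = (np.abs(reward_table[action] - reward) < tol).astype(float)

    # Weight the prior belief by the reward likelihood, propagate it
    # through the transition model, then condition on the observation.
    weighted = belief * reward_lik                      # shape (S,)
    predicted = weighted @ T[action]                    # shape (S,)
    posterior = predicted * O[action, :, observation]   # shape (S,)

    norm = posterior.sum()
    if norm == 0.0:
        raise ValueError("observation/reward pair has zero probability "
                         "under the current belief")
    return posterior / norm

States whose reward model is inconsistent with the observed reward receive zero posterior weight; that extra pruning is exactly the information the reward history contributes beyond the observation history alone.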