Planning and control in stochastic domains with imperfect information


Partially observable Markov decision processes (POMDPs) can be used to model complex control problems that include both action-outcome uncertainty and imperfect observability. A control problem within the POMDP framework is expressed as a dynamic optimization problem with a value function that combines costs or rewards from multiple steps. Although the POMDP…
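Because the state is not directly observable, a POMDP controller maintains a belief, a probability distribution over states, and updates it after each action and observation via Bayes' rule. The following is a minimal sketch of that belief update for a hypothetical two-state, one-action POMDP; the matrices here are illustrative and not taken from the text.

```python
import numpy as np

# Hypothetical two-state POMDP (illustrative numbers, not from the paper).
# T[a][s, s'] : probability of moving from state s to s' under action a.
# O[a][s', o] : probability of observing o after landing in state s'.
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3],
                  [0.4, 0.6]])}

def belief_update(b, a, o):
    """Bayes-filter update: b'(s') is proportional to
    O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    predicted = b @ T[a]              # predict next-state distribution
    unnorm = predicted * O[a][:, o]   # weight by observation likelihood
    return unnorm / unnorm.sum()      # renormalize to a proper belief

b0 = np.array([0.5, 0.5])             # uniform prior over the two states
b1 = belief_update(b0, a=0, o=0)      # belief after acting and observing
```

The value function mentioned in the abstract is then defined over these belief states rather than over the hidden states themselves, which is what makes POMDP planning a dynamic optimization problem on a continuous (belief) space.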


