Approximate Dynamic Programming via Iterated Bellman Inequalities

@inproceedings{Wang2010ApproximateDP,
  title={Approximate Dynamic Programming via Iterated Bellman Inequalities},
  author={Yang Wang and Brendan O'Donoghue and Stephen Boyd},
  year={2010}
}
In this paper we introduce new methods for finding functions that lower bound the value function of a stochastic control problem, using an iterated form of the Bellman inequality. Our method is based on solving linear or semidefinite programs, and produces both a bound on the optimal objective and a suboptimal policy that appears to work very well. These results extend and improve bounds obtained in a previous paper using a single Bellman inequality condition. We describe the methods in…
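To make the lower-bounding idea concrete, here is a minimal sketch of the single-Bellman-inequality baseline that the paper extends. The MDP data below (a 3-state, 2-action problem) is made up for illustration; the construction is the standard linear-programming characterization: any V satisfying V <= TV is a certified lower bound on the true value function V*, and maximizing 1'V over that constraint set recovers the tightest such bound.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny 3-state, 2-action MDP with made-up illustrative data.
np.random.seed(0)
n, m, gamma = 3, 2, 0.9
P = np.random.dirichlet(np.ones(n), size=(n, m))  # P[s, a, :] = transition probs
c = np.random.rand(n, m)                          # c[s, a]    = stage cost

# Value iteration to get the true value function V* (for comparison only).
V = np.zeros(n)
for _ in range(2000):
    V = (c + gamma * P @ V).min(axis=1)           # Bellman operator T

# LP lower bound: maximize 1'V  subject to the Bellman inequality V <= TV,
# i.e. V(s) <= c(s,a) + gamma * sum_s' P[s,a,s'] V(s') for every (s, a),
# written in standard form as (I - gamma * P_a) V <= c_a for each action a.
A_ub = np.vstack([np.eye(n) - gamma * P[:, a, :] for a in range(m)])
b_ub = np.hstack([c[:, a] for a in range(m)])
res = linprog(c=-np.ones(n), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n)
V_lb = res.x

# Every feasible point is a pointwise lower bound on V*.
assert np.all(V_lb <= V + 1e-6)
```

With the full (unrestricted) constraint set the LP optimum coincides with V*; the approximation, and the motivation for the iterated inequality V <= T^M V studied in the paper, arises when V is restricted to a parametric family (e.g. quadratics), which the single-inequality condition handles conservatively.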

Citations

Semantic Scholar estimates that this publication has 55 citations based on the available data (citations per year, 2009–2017).
