Domain decomposition for stochastic optimal control

@article{Horowitz2014DomainDF,
  title={Domain decomposition for stochastic optimal control},
  author={Matanya B. Horowitz and Ivan Papusha and Joel W. Burdick},
  journal={53rd IEEE Conference on Decision and Control},
  year={2014},
  pages={1866-1873}
}
This work proposes a method for solving linear stochastic optimal control (SOC) problems using sum of squares and semidefinite programming. Previous work had used polynomial optimization to approximate the value function, requiring a high polynomial degree to capture local phenomena. To improve the scalability of the method to problems of interest, a domain decomposition scheme is presented. By using local approximations, lower degree polynomials become sufficient, and both local and global… 
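For context (the linearity referred to above is not spelled out in the truncated abstract), the structure exploited throughout this line of work is the standard exponential transformation of the HJB equation. A minimal sketch, with notation assumed here rather than taken from the paper: for dynamics dx = (f(x) + G(x)u) dt + B(x) dω, cost rate q(x) + (1/2) u^T R u, and the structural assumption λ G R^{-1} G^T = B B^T, the substitution V(x) = -λ log Ψ(x) turns the nonlinear HJB into a linear PDE in the desirability Ψ:

0 = -\frac{q(x)}{\lambda}\,\Psi(x) + f(x)^\top \nabla\Psi(x) + \frac{1}{2}\,\mathrm{tr}\!\left(B(x)B(x)^\top \nabla^2\Psi(x)\right).

Approximating Ψ by a polynomial and relaxing this equation to SOS-representable inequality constraints is, roughly, what yields the semidefinite programs referred to above; the domain decomposition splits the state space so that low-degree polynomials suffice on each subdomain.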

Citations

Efficient Methods for Stochastic Optimal Control
TLDR
The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control, and presents a novel method for uncertainty quantification of systems governed by partial differential constraints.
A semidefinite programming approach for stochastic switched optimal control problems
We consider solving stochastic switched optimal control problems (SSOCPs) with polynomial data using sum-of-squares (S.O.S) and semidefinite programming (SDP), where both the number of switches and
Optimal Controller Synthesis for Nonlinear Dynamical Systems
TLDR
It is shown how the viscosity solutions may be made arbitrarily close to the optimal solution via a hierarchy of semidefinite optimization problems, and a priori bounds on trajectory suboptimality when using approximate value functions are developed.
Suboptimal stabilizing controllers for linearly solvable systems
TLDR
The classical nonlinear Hamilton-Jacobi-Bellman partial differential equation is transformed into a linear partial differential equation for a class of systems with a particular constraint on the stochastic disturbance, allowing approximate polynomial solutions to be generated using sum of squares programming.
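In the notation of the sketch after the abstract above (an assumed notation, not quoted from this reference), an approximate desirability function immediately yields a feedback controller, since the optimal control is recovered as

u^*(x) = -R^{-1} G(x)^\top \nabla V(x) = \lambda\, R^{-1} G(x)^\top \frac{\nabla \Psi(x)}{\Psi(x)},

so a polynomial approximation of Ψ translates directly into a rational-function control law, whose suboptimality the works in this cluster then attempt to bound.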
Linearly Solvable Stochastic Control Lyapunov Functions
TLDR
A priori bounds on trajectory suboptimality when using approximate value functions are developed, and it is demonstrated that these methods, and bounds, can be applied to a more general class of nonlinear systems not obeying the constraint on stochastic forcing.
Optimal Controller Synthesis for Nonlinear Systems
TLDR
This thesis aims to close these gaps by proposing optimal controller synthesis techniques for two classes of nonlinear systems, linearly solvable nonlinear systems and hybrid nonlinear systems, with methods to synthesize optimal controllers for both quantitative and qualitative objectives.
Robustness, Adaptation, and Learning in Optimal Control
TLDR
This thesis tackles the design and verification complexity of implementable control architectures for smart systems while certifying their safety, robustness, and performance, by carefully employing tractable lower and upper bounds on the Lyapunov function and making connections to robust control, formal synthesis, and machine learning.
Sparse Deconvolution with Applications to Spike Sorting
TLDR
This thesis introduces a sparse deconvolution approach to spike detection, which seeks to detect spikes and represent them as linear combinations of basis waveforms, and a clustering algorithm based on a mixture of drifting t-distributions.

References

Showing 1-10 of 39 references
Semidefinite relaxations for stochastic optimal control policies
TLDR
This work proposes a new method for obtaining approximate solutions to these linear stochastic optimal control (SOC) problems; a candidate polynomial with variable coefficients is proposed as the solution to the SOC problem.
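To make the "candidate polynomial with variable coefficients" idea concrete, here is a minimal, hypothetical sketch in Python with cvxpy (an illustration only, not the formulation of the cited work): a polynomial is constrained to be a sum of squares by requiring a Gram-matrix representation, which is a semidefinite constraint. The example certifies a lower bound gamma on a fixed univariate quartic; in the SOC setting the same mechanism is applied to HJB-type inequalities in the unknown value-function coefficients.

# Hypothetical illustration: certify a lower bound on
#   f(x) = x^4 - 3x^2 + 2x + 5
# by requiring f(x) - gamma to be a sum of squares, i.e.
#   f(x) - gamma = m(x)^T Q m(x),  Q >= 0,  with m(x) = [1, x, x^2].
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)  # Gram matrix (symmetric PSD)
gamma = cp.Variable()              # lower bound to maximize

constraints = [
    Q[0, 0] == 5 - gamma,          # constant term
    2 * Q[0, 1] == 2,              # coefficient of x
    Q[1, 1] + 2 * Q[0, 2] == -3,   # coefficient of x^2
    2 * Q[1, 2] == 0,              # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]

cp.Problem(cp.Maximize(gamma), constraints).solve()
print("certified lower bound:", gamma.value)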
Efficient Methods for Stochastic Optimal Control
TLDR
The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control, and presents a novel method for uncertainty quantification of systems governed by partial differential constraints.
A Unified Theory of Linearly Solvable Optimal Control
TLDR
A unified theory of Linearly Solvable Optimal Control is presented, that is, a class of optimal control problems whose solution reduces to solving a linear equation or a linear integral equation either for finite state spaces or for continuous state spaces.
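As a reminder of what "reduces to solving a linear equation" means in the finite-state case (standard linearly solvable MDP notation, assumed here rather than quoted): with state cost q(x), passive dynamics p(x' | x), and desirability z(x) = exp(-v(x)), the Bellman equation becomes

z(x) = e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x'), \qquad \text{i.e.}\quad z = \mathrm{diag}(e^{-q})\, P\, z,

so the value function is obtained from a linear (eigenvector-type) problem rather than a nonlinear fixed point; the continuous-state analogue is the linear PDE sketched after the abstract above.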
Nonlinear Optimal Control via Occupation Measures and LMI-Relaxations
TLDR
This work provides a simple hierarchy of LMI (linear matrix inequality) relaxations whose optimal values form a nondecreasing sequence of lower bounds on the optimal value of the OCP under some convexity assumptions.
Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization
In the first part of this thesis, we introduce a specific class of Linear Matrix Inequalities (LMI) whose optimal solution can be characterized exactly. This family corresponds to the case where the
Path integrals and symmetry breaking for optimal control theory
This paper considers linear-quadratic control of a non-linear dynamical system subject to arbitrary cost. I show that for this class of stochastic control problems the non-linear
A Generalized Path Integral Control Approach to Reinforcement Learning
TLDR
The framework of stochastic optimal control with path integrals is used to derive a novel approach to RL with parameterized policies; the approach shows interesting similarities with previous RL research in the framework of probability matching and provides intuition as to why the somewhat heuristically motivated probability matching approach can actually perform well.
Linear Hamilton Jacobi Bellman Equations in high dimensions
TLDR
This work combines recent results on the structure of the HJB, and its reduction to a linear Partial Differential Equation (PDE), with methods based on low-rank tensor representations, known as separated representations, to address the curse of dimensionality.
Controlled Markov processes and viscosity solutions
This book is intended as an introduction to optimal stochastic control for continuous time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems
Linear theory for control of nonlinear stochastic systems.
  H. Kappen, Physical Review Letters, 2005
TLDR
The role of noise and the issue of efficient computation in stochastic optimal control problems are addressed, and a class of nonlinear control problems is considered that can be formulated as a path integral, in which the noise plays the role of temperature.
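For reference, the path-integral view summarized here rests on the Feynman-Kac representation of the linearized equation; a sketch in the notation assumed earlier, with terminal cost φ:

\Psi(x, t) = \mathbb{E}\!\left[\exp\!\left(-\frac{1}{\lambda}\Big(\phi(x_T) + \int_t^T q(x_s)\,ds\Big)\right) \,\middle|\, x_t = x \right],

where the expectation is taken over the uncontrolled (passive) dynamics dx = f(x) dt + B(x) dω. The parameter λ acts as a temperature and, under the structural assumption above, is tied to the noise level, which is the sense in which noise plays the role of temperature in this reference.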
...