Optimal decisions with limited information
- A. Gattami
- Ph.D. dissertation, Department of Automatic…
We revisit the classical H∞ analysis problem of computing the ℓ2-induced norm of a linear time-invariant system. We follow an approach based on converting the maximization over signals into a maximization over a class of deterministic covariance matrices. The reformulation in terms of these covariance matrices greatly simplifies the dynamic analysis problem and converts the computation into a convex, constrained matrix maximization problem. Furthermore, the equivalence holds for the actual H∞ norm of the system rather than a bound, and thus does not require the typical "gamma iterations". We argue that this approach is attractive, elementary, and constructive, in that the worst-case disturbance is also easily obtained as a state feedback constructed from the solution of the matrix problem. We give an illustrative example with some interpretations of the results.

NOTATION
- R: The set of real numbers.
- Sn: The set of n × n symmetric matrices.
- S+: The set of n × n symmetric positive semidefinite matrices.
- S++: The set of n × n symmetric positive definite matrices.
- A ⪰ B ⇐⇒ A − B ∈ S+.
- A ≻ B ⇐⇒ A − B ∈ S++.
- Tr(A): The trace of the matrix A.
- In: The n × n identity matrix.
- 0m×n: The m × n zero matrix.
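For orientation, the H∞ norm being analyzed is the peak gain of the system's frequency response. The sketch below estimates it for a discrete-time LTI system (A, B, C, D) by sweeping the unit circle and taking the largest singular value of the transfer matrix; this is a plain frequency-grid approximation for illustration, not the covariance-matrix reformulation developed in the text, and the function name `hinf_norm_grid` is our own.

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, n_grid=2000):
    """Approximate the H-infinity (l2-induced) norm of the discrete-time
    system x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k).

    Sweeps z = e^{j*w} over the unit circle and returns the largest
    singular value of G(z) = C (zI - A)^{-1} B + D found on the grid.
    """
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    best = 0.0
    for w in np.linspace(0.0, np.pi, n_grid):
        z = np.exp(1j * w)
        # Transfer matrix at this frequency point.
        G = C @ np.linalg.solve(z * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

# Example: scalar system G(z) = 1/(z - 0.5); peak gain 2 at z = 1 (w = 0).
print(hinf_norm_grid([[0.5]], [[1.0]], [[1.0]], [[0.0]]))
```

A grid sweep only lower-bounds the true norm between grid points, which is one reason a convex exact reformulation, as pursued in the text, is attractive.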