Corpus ID: 211204701

Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems

@article{Sattar2020NonasymptoticAA,
  title={Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems},
  author={Yahya Sattar and Samet Oymak},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.08538}
}
We consider the problem of learning stabilizable systems governed by the nonlinear state equation $h_{t+1}=\phi(h_t,u_t;\theta)+w_t$. Here $\theta$ is the unknown system dynamics, $h_t$ is the state, $u_t$ is the input and $w_t$ is the additive noise vector. We study gradient-based algorithms to learn the system dynamics $\theta$ from samples obtained from a single finite trajectory. If the system is run by a stabilizing input policy, we show that temporally-dependent samples can be approximated… 
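The setup in the abstract can be sketched in a few lines. The sketch below is an illustrative instantiation, not the paper's construction: it assumes $\phi(h,u;\Theta)=\tanh(\Theta\,[h;u])$ for an unknown matrix $\Theta$, a small-gain random input policy, and plain SGD on the one-step squared prediction error over the temporally dependent samples of a single trajectory.

```python
import numpy as np

# Hypothetical instantiation of h_{t+1} = phi(h_t, u_t; theta) + w_t with
# phi(h, u; Theta) = tanh(Theta @ [h; u]); Theta is the unknown dynamics matrix.
rng = np.random.default_rng(0)
n, p, T = 4, 2, 20000                       # state dim, input dim, trajectory length
Theta_true = 0.3 * rng.standard_normal((n, n + p))

def step(h, u, Theta):
    return np.tanh(Theta @ np.concatenate([h, u]))

# One finite trajectory under a random input policy (illustrative, small gain).
H = np.zeros((T + 1, n))
U = 0.5 * rng.standard_normal((T, p))
for t in range(T):
    H[t + 1] = step(H[t], U[t], Theta_true) + 0.01 * rng.standard_normal(n)

# SGD on the one-step squared prediction error 0.5 * ||tanh(Theta x_t) - h_{t+1}||^2.
Theta_hat, lr = np.zeros_like(Theta_true), 0.1
for t in range(T):
    x = np.concatenate([H[t], U[t]])
    pred = np.tanh(Theta_hat @ x)
    resid = pred - H[t + 1]
    grad = (resid * (1 - pred**2))[:, None] * x[None, :]   # chain rule through tanh
    Theta_hat -= lr * grad

print(np.linalg.norm(Theta_hat - Theta_true))  # estimation error shrinks with T
```

Despite the temporal dependence between samples, the estimate recovers $\Theta$ here because the bounded nonlinearity and the exciting input keep the state covariance well-conditioned.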


Learning nonlinear dynamical systems from a single trajectory

This work gives a general recipe whereby global stability for nonlinear dynamical systems can be used to certify that the state-vector covariance is well-conditioned, and uses these tools to extend well-known algorithms for efficiently learning generalized linear models to the dependent setting.

Active Learning for Nonlinear System Identification with Guarantees

This work studies a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs, and proposes an active learning approach that repeats three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.

Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems

This work provides the first offline algorithm that can learn non-linear dynamical systems without the mixing assumption, and demonstrates that for correlated data, specialized methods designed for the dependency structure in data can significantly improve upon the sample complexity of existing results for mixing systems.

Online Stochastic Gradient Descent Learns Linear Dynamical Systems from A Single Trajectory

This work shows that SGD converges linearly in expectation to within an arbitrarily small Frobenius-norm distance from the ground truth weights, and is the first work to establish linear convergence characteristics for online and offline gradient-based iterative methods for weight matrix estimation in linear dynamical systems from a single trajectory.
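The single-trajectory online setting this summary describes can be sketched for a linear system. The matrix `A_true`, the noise level, and the step size below are illustrative assumptions: the updates stream over one trajectory, with each new observation used for exactly one SGD step.

```python
import numpy as np

# Sketch: online SGD estimating the transition matrix of h_{t+1} = A h_t + w_t
# from a single trajectory; A_true (spectral radius < 1) is illustrative.
rng = np.random.default_rng(1)
n, T = 3, 50000
A_true = 0.5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))

h = rng.standard_normal(n)
A_hat = np.zeros((n, n))
for t in range(T):
    h_next = A_true @ h + 0.1 * rng.standard_normal(n)
    # SGD step on the one-step squared error 0.5 * ||A_hat h - h_next||^2.
    A_hat -= 0.05 * np.outer(A_hat @ h - h_next, h)
    h = h_next

print(np.linalg.norm(A_hat - A_true, 'fro'))  # Frobenius-norm estimation error
```

With a constant step size the iterate settles into a small noise-driven neighborhood of the true weights; a decaying step size would drive the error to zero.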

Exact Asymptotics for Linear Quadratic Adaptive Control

This work is able to derive asymptotically-exact expressions for the regret, estimation error, and prediction error of a rate-optimal stepwise-updating LQAC algorithm by carefully combining recent finite-sample performance bounds with a particular martingale central limit theorem.

Learning the Linear Quadratic Regulator from Nonlinear Observations

A new algorithm, RichID, is introduced, which learns a near-optimal policy for the RichLQR with sample complexity scaling only with the dimension of the latent state space and the capacity of the decoder function class.

Nonlinear System Identification With Prior Knowledge on the Region of Attraction

An identification method in the form of an optimization problem, minimizing the fitting error while guaranteeing the desired stability property, is proposed; it admits a solution in the form of a linear combination of sections of the kernel and its derivatives.

Safe Adaptive Learning-based Control for Constrained Linear Quadratic Regulators with Regret Guarantees

A polynomial-time algorithm is proposed for an unknown linear system with a quadratic cost function subject to safety constraints on both the states and actions, guaranteeing feasibility and constraint satisfaction with high probability under proper conditions.

Convex Nonparametric Formulation for Identification of Gradient Flows

A nonparametric identification method for nonlinear gradient-flow dynamics is proposed; an equivalent finite-dimensional formulation is derived, a convex optimization in the form of a quadratic program, which provides scalability and the opportunity to utilize recently developed large-scale optimization solvers.

Generalization Guarantees for Neural Architecture Search with Train-Validation Split

It is revealed that the upper-level problem helps select the most generalizable model and prevent overfitting with a near-minimal validation sample size, and generalization bounds are established for continuous search spaces, which are highly relevant for popular differentiable search schemes.

References

SHOWING 1-10 OF 84 REFERENCES

Stochastic Gradient Descent Learns State Equations with Nonlinear Activations

It is proved that the SGD estimate linearly converges to the ground truth weights while using a near-optimal sample size, providing a novel SGD convergence result with nonlinear activations.

Finite-time Analysis of Approximate Policy Iteration for the Linear Quadratic Regulator

A simple adaptive procedure based on $\varepsilon$-greedy exploration, which relies on approximate PI as a sub-routine, is constructed and shown to obtain a regret bound improving upon a recent result of Abbasi-Yadkori et al.

How fast can linear dynamical systems be learned?

Finite-time error bounds for estimating general linear time-invariant systems from a single observed trajectory using the method of least squares are provided, and it is demonstrated that the least squares solution may be statistically inconsistent under certain conditions even when the signal-to-noise ratio is high.

Stability Bounds for Non-i.i.d. Processes

Novel stability-based generalization bounds that hold even in this more general setting are proved, strictly generalizing the bounds given in the i.i.d. case.

Regret Analysis for Adaptive Linear-Quadratic Policies

To establish high-probability regret bounds for the classical problem of Linear-Quadratic control, certain novel techniques are introduced to comprehensively address the probabilistic behavior of dependent random matrices with heavy-tailed distributions.

Near optimal finite time identification of arbitrary linear dynamical systems

This work derives finite-time error bounds for estimating general linear time-invariant (LTI) systems from a single observed trajectory using the method of least squares, and demonstrates that the least squares solution may be statistically inconsistent under certain conditions even when the signal-to-noise ratio is high.
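The least-squares estimator analyzed in this reference is standard and easy to state: regress each observed next state on the current state across the single trajectory. The system matrix and noise level below are illustrative.

```python
import numpy as np

# Sketch of single-trajectory least-squares identification of an LTI system
# h_{t+1} = A h_t + w_t; A_true is an illustrative stable matrix.
rng = np.random.default_rng(2)
n, T = 3, 5000
A_true = np.array([[0.6, 0.1, 0.0],
                   [0.0, 0.5, 0.1],
                   [0.1, 0.0, 0.7]])

H = np.zeros((T + 1, n))
for t in range(T):
    H[t + 1] = A_true @ H[t] + 0.2 * rng.standard_normal(n)

# Ordinary least squares: regress h_{t+1} on h_t across the whole trajectory.
X, Y = H[:-1], H[1:]
Z, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves X @ Z ≈ Y
A_ls = Z.T                                  # so A = Z^T

print(np.linalg.norm(A_ls - A_true, 'fro'))  # error decays roughly as 1/sqrt(T)
```

For a stable system like this one the estimate is consistent; the inconsistency phenomenon noted in the summary arises in more delicate regimes, not in this benign example.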

Generalization bounds for non-stationary mixing processes

The first generalization bounds for time series prediction with a non-stationary mixing stochastic process is presented and it is proved that fast learning rates can be achieved by extending existing local Rademacher complexity analyses to the non-i.i.d. data.

Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems

This work characterizes the convergence rate of a canonical stochastic, two-point, derivative-free method for linear-quadratic systems in which the initial state of the system is drawn at random, and shows that for problems with effective dimension $D$, such a method converges to an $\epsilon$-approximate solution within $\widetilde{\mathcal{O}}(D/\epsilon)$ steps.

Certainty Equivalent Control of LQR is Efficient

The results show that certainty equivalent control with $\varepsilon$-greedy exploration achieves $\tilde{\mathcal{O}}(\sqrt{T})$ regret in the adaptive LQR setting, yielding the first guarantee of a computationally tractable algorithm that achieves nearly optimal regret for adaptive LQR.
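Certainty equivalent control itself is a simple two-stage recipe: estimate the system from data, then control it as if the estimate were exact. The sketch below uses illustrative system matrices and costs, least squares for estimation, and a plain Riccati iteration (rather than any particular solver) for the LQR gain; it is a minimal illustration of the principle, not the paper's algorithm.

```python
import numpy as np

# Certainty equivalence: (1) estimate (A, B) by least squares from one excited
# trajectory, (2) solve LQR for the estimates. All matrices are illustrative.
rng = np.random.default_rng(3)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

T, h = 2000, np.zeros(2)
Z, Y = [], []
for t in range(T):
    u = rng.standard_normal(1)                 # exploratory random input
    h_next = A_true @ h + B_true @ u + 0.1 * rng.standard_normal(2)
    Z.append(np.concatenate([h, u]))
    Y.append(h_next)
    h = h_next

# Least-squares estimate of [A B] stacked, then split.
Theta, *_ = np.linalg.lstsq(np.array(Z), np.array(Y), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]

# Riccati value iteration for the LQR gain under the estimated model.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)

print(np.max(np.abs(np.linalg.eigvals(A_hat - B_hat @ K))))  # closed-loop spectral radius
```

With enough exciting data the estimated model is close to the truth, so the certainty equivalent gain stabilizes the real system as well; the cited paper quantifies exactly how fast this works with $\varepsilon$-greedy exploration.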
...