Corpus ID: 1731857

Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images

@article{Watter2015EmbedTC,
  title={Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images},
  author={Manuel Watter and Jost Tobias Springenberg and Joschka Boedecker and Martin A. Riedmiller},
  journal={ArXiv},
  year={2015},
  volume={abs/1506.07365}
}
We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong… 
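
The core structural assumption is easiest to see numerically. Below is a minimal sketch, under assumed latent and control dimensions and with random stand-in weight matrices in place of learned networks, of a locally linear latent transition of the kind E2C imposes; it illustrates the idea rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a locally linear latent transition:
# for each latent state z_t and action u_t, a learned function emits A_t, B_t, o_t
# and the next latent is predicted as z_{t+1} = A_t z_t + B_t u_t + o_t.
import numpy as np

rng = np.random.default_rng(0)
z_dim, u_dim, h_dim = 3, 1, 16          # hypothetical dimensions

# Stand-in for a learned MLP that maps z_t to transition parameters.
W1 = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_A = rng.normal(scale=0.1, size=(z_dim * z_dim, h_dim))
W_B = rng.normal(scale=0.1, size=(z_dim * u_dim, h_dim))
W_o = rng.normal(scale=0.1, size=(z_dim, h_dim))

def transition(z, u):
    """Predict z_{t+1} from (z_t, u_t) with locally linear dynamics."""
    h = np.tanh(W1 @ z)                                   # features of the current latent
    A = np.eye(z_dim) + (W_A @ h).reshape(z_dim, z_dim)   # local state matrix
    B = (W_B @ h).reshape(z_dim, u_dim)                   # local control matrix
    o = W_o @ h                                           # local offset
    return A @ z + B @ u + o

z_t = rng.normal(size=z_dim)            # latent code from the encoder
u_t = rng.normal(size=u_dim)            # applied control
z_next = transition(z_t, u_t)
print(z_next.shape)                     # (3,)
```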

Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control

A new unsupervised neural network model is introduced that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control, enabling long-term prediction of dynamics in the image space and the synthesis of energy-based controllers.

Dynamic Variational Autoencoders for Visual Process Modeling

  • A. Sagel, Hao Shen
  • Computer Science
    ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2020
This work proposes a joint learning framework, combining a vector autoregressive model and a Variational Autoencoder to simultaneously learn a non-linear observation model as well as a linear state model from sequences of frames, and validates this approach on the synthesis of artificial sequences and dynamic textures.
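
To make the described structure concrete, here is a toy rollout that pairs a linear (VAR(1)) latent state model with a non-linear observation map; dimensions and random weights are assumptions, and this is a sketch of the idea rather than the paper's model.

```python
# Toy sketch: a linear latent state model rolled forward in time, with each
# latent state rendered into a frame by a non-linear observation map.
import numpy as np

rng = np.random.default_rng(0)
z_dim, frame_dim, T = 4, 64, 20                 # hypothetical sizes

A = 0.95 * np.eye(z_dim) + 0.01 * rng.normal(size=(z_dim, z_dim))  # stable-ish VAR(1)
W_dec = rng.normal(scale=0.3, size=(frame_dim, z_dim))             # decoder weights

def decode(z):
    """Non-linear observation model mapping a latent state to a frame."""
    return np.tanh(W_dec @ z)

z = rng.normal(size=z_dim)
frames = []
for _ in range(T):                              # roll the linear latent dynamics
    z = A @ z + 0.01 * rng.normal(size=z_dim)   # VAR(1) transition with process noise
    frames.append(decode(z))                    # render through the decoder
frames = np.stack(frames)                       # synthesized sequence (T, frame_dim)
print(frames.shape)
```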

Robust Locally-Linear Controllable Embedding

A new model for learning robust locally-linear controllable embedding (RCE) is presented, which directly estimates the predictive conditional density of the future observation given the current one, while introducing a bottleneck between the current and future observations.

Linearizing Visual Processes with Convolutional Variational Autoencoders

A joint learning framework is proposed, combining a Linear Dynamic System and a Variational Autoencoder with convolutional layers to simultaneously learn the non-linear observation model as well as the linear state transition from a sequence of observed frames.

Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control

To make PCC tractable, an amortized variational bound for the PCC loss function is derived and it is demonstrated that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance.

Predictive Coding for Locally-Linear Control

This paper proposes a novel information-theoretic LCE approach and shows theoretically that explicit next-observation prediction can be replaced with predictive coding, and uses predictive coding to develop a decoder-free LCE model whose latent dynamics are amenable to locally-linear control.
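
The shift from reconstructing the next observation to predicting it can be sketched with a contrastive objective. The snippet below computes an InfoNCE-style loss over predicted and encoded next latents using a bilinear critic; all shapes and the critic form are assumptions for illustration and not the paper's exact objective.

```python
# Sketch of a decoder-free, predictive-coding style objective: score whether a
# candidate next latent matches the prediction made from (z_t, u_t), using an
# InfoNCE-style contrastive loss over a batch.
import numpy as np

rng = np.random.default_rng(0)
batch, z_dim = 8, 4                             # hypothetical sizes

z_pred = rng.normal(size=(batch, z_dim))        # predicted next latents f(z_t, u_t)
z_next = rng.normal(size=(batch, z_dim))        # encoded true next observations
W = rng.normal(scale=0.1, size=(z_dim, z_dim))  # learnable bilinear critic

# Score every (prediction, candidate) pair; the diagonal holds the positives.
scores = z_pred @ W @ z_next.T                  # (batch, batch)
log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
info_nce = -np.mean(np.diag(log_probs))         # lower is better: a mutual-information proxy
print(float(info_nce))
```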

No Representation without Transformation

This work extends the framework of variational autoencoders to represent transformations explicitly in the latent space and shows that the inferred latent transformations reflect interpretable properties in the observation space.

DeepKoCo: Efficient latent planning with an invariant Koopman representation

A novel model-based agent, DeepKoCo, learns a latent Koopman representation from images that allows it to plan efficiently using linear control methods, such as linear model predictive control, making the proposed agent more amenable to real-life applications.
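
Why a (near-)linear latent makes planning cheap can be illustrated without images at all: once states are lifted into Koopman-style coordinates, the transition matrices can be fit by ordinary least squares and handed to standard linear MPC or LQR machinery. The sketch below uses a hand-written lifting and a toy pendulum, both assumptions, rather than the learned representation in DeepKoCo.

```python
# Illustrative sketch (not DeepKoCo itself): lift a 2-D pendulum state into
# Koopman-style coordinates and fit z_{t+1} ~ A z_t + B u_t by least squares.
import numpy as np

rng = np.random.default_rng(0)

def lift(x):
    """Hypothetical fixed lifting of a 2-D state into Koopman coordinates."""
    return np.array([x[0], x[1], np.sin(x[0]), np.cos(x[0])])

def step(x, u, dt=0.05):
    """Toy damped pendulum dynamics used to generate data."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-np.sin(theta) - 0.1 * omega + u)])

X, U, Y = [], [], []
x = np.array([0.5, 0.0])
for _ in range(200):                              # short random-torque trajectory
    u = rng.uniform(-1.0, 1.0)
    x_next = step(x, u)
    X.append(lift(x)); U.append([u]); Y.append(lift(x_next))
    x = x_next
X, U, Y = np.array(X), np.array(U), np.array(Y)

Phi = np.hstack([X, U])                           # (T, 5): lifted state and control
AB, *_ = np.linalg.lstsq(Phi, Y, rcond=None)      # least-squares fit of [A; B]
A, B = AB[:4].T, AB[4:].T                         # A: (4, 4), B: (4, 1)

z_pred = A @ lift(x) + B @ np.array([0.0])        # one-step prediction with u = 0
print(np.linalg.norm(Phi @ AB - Y) / len(Y))      # average fit residual
```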

Equivariant Deep Dynamical Model for Motion Prediction

This paper proposes an SO(3) equivariant deep dynamical model (EqDDM) for motion prediction that learns a structured representation of the input space in the sense that the embedding varies with symmetry transformations.

Learning Variational Latent Dynamics: Towards Model-based Imitation and Control

The proposed approach leverages progress in variational Bayes and sequence modeling, extracting a low-dimensional latent space in which the dynamical relations of interest can be compactly represented and learned.
...

References

SHOWING 1-10 OF 52 REFERENCES

From Pixels to Torques: Policy Learning with Deep Dynamical Models

This paper introduces a data-efficient, model-based reinforcement learning algorithm that learns a closed-loop control policy from pixel information only, and facilitates fully autonomous learning from pixels to torques.

NICE: Non-linear Independent Components Estimation

We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is one in which the data has a distribution that is easy to model.

Deep AutoRegressive Networks

An efficient approximate parameter estimation method based on the minimum description length (MDL) principle is derived, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference.

DRAW: A Recurrent Neural Network For Image Generation

The Deep Recurrent Attentive Writer neural network architecture for image generation substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.

Deep auto-encoder neural networks in reinforcement learning

A framework for combining the training of deep auto-encoders (for learning compact feature spaces) with recently proposed batch-mode RL algorithms (for learning policies) is proposed, with an emphasis on data efficiency and on studying the properties of the feature spaces automatically constructed by the deep auto-encoder neural networks.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
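
For reference, the quantities this algorithm optimises can be written out directly. The sketch below computes a single-sample Monte Carlo estimate of the ELBO for a diagonal-Gaussian encoder and a unit-variance Gaussian decoder via the reparameterization trick; fixed random linear maps stand in for the networks purely for illustration.

```python
# Numeric sketch of the evidence lower bound (ELBO) optimised by AEVB-style
# training, for one data point, with linear stand-ins for encoder and decoder.
import numpy as np

rng = np.random.default_rng(0)
x_dim, z_dim = 10, 2
x = rng.normal(size=x_dim)                       # one observed data point

W_mu, W_logvar = rng.normal(size=(2, z_dim, x_dim)) * 0.1  # encoder stand-ins
W_dec = rng.normal(size=(x_dim, z_dim)) * 0.1              # decoder stand-in

mu, logvar = W_mu @ x, W_logvar @ x              # q(z|x) = N(mu, diag(exp(logvar)))
eps = rng.normal(size=z_dim)
z = mu + np.exp(0.5 * logvar) * eps              # reparameterized sample

x_hat = W_dec @ z                                # mean of p(x|z), unit variance
log_px_z = -0.5 * np.sum((x - x_hat) ** 2 + np.log(2 * np.pi))      # reconstruction term
kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)          # KL(q(z|x) || N(0, I))
elbo = log_px_z - kl
print(float(elbo))
```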

Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations

This article reviews an emerging field that aims for autonomous reinforcement learning directly from sensor observations, along with two approaches for learning intermediate state representations from previous experiences: deep auto-encoders and slow feature analysis.

Latent Kullback Leibler Control for Continuous-State Systems using Probabilistic Graphical Models

This paper proposes to embed a KL control problem in a probabilistic graphical model where observed variables correspond to the continuous (possibly high-dimensional) state of the system and latent variables correspond to a discrete representation of the state amenable to KL control computation.

Learning of Non-Parametric Control Policies with High-Dimensional State Features

This paper develops a policy search algorithm that integrates robust policy updates and kernel embeddings and can learn nonparametric control policies for infinite horizon continuous MDPs with high-dimensional sensory representations.

Learning Stochastic Recurrent Networks

The proposed model is a generalisation of deterministic recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs), and is evaluated on four polyphonic musical data sets and motion capture data.
...