Corpus ID: 231985499

Meta-Learning Dynamics Forecasting Using Task Inference

Rui Wang, Robin Walters, Rose Yu
Current deep learning models for dynamics forecasting struggle with generalization. They can only forecast in a specific domain and fail when applied to systems with different parameters, external forces, or boundary conditions. We propose a model-based meta-learning method called DyAd which can generalize across heterogeneous domains by partitioning them into different tasks. DyAd has two parts: an encoder which infers the time-invariant hidden features of the task with weak supervision, and a… 
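A minimal sketch of the two-part idea described above, assuming a drastically simplified setup (the encoder, forecaster, and weight matrices here are illustrative stand-ins, not DyAd's actual architecture): an encoder infers a time-invariant task code from a trajectory, and a forecaster conditions its one-step prediction on that code.

```python
import numpy as np

def encode_task(trajectory):
    """Infer a time-invariant task code: here, simply average the
    per-step features over time so the code cannot depend on t."""
    return trajectory.mean(axis=0)

def forecast(state, task_code, W, U):
    """One-step forecaster whose linear dynamics are modulated by the
    inferred task code (a crude stand-in for adaptive conditioning)."""
    return W @ state + U @ task_code

rng = np.random.default_rng(0)
trajectory = rng.normal(size=(50, 3))   # 50 time steps, 3-dim state
W = np.eye(3) * 0.9                     # shared dynamics weights
U = np.eye(3) * 0.1                     # task-conditioning weights

code = encode_task(trajectory)
next_state = forecast(trajectory[-1], code, W, U)
print(next_state.shape)  # (3,)
```

Because the code is a time average, reordering the trajectory leaves it unchanged, which is the sense in which it captures only time-invariant task features.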

Generalizing to New Physical Systems via Context-Informed Dynamics Model

A new framework for context-informed dynamics adaptation (CoDA), which takes into account the distributional shift across systems for fast and efficient adaptation to new dynamics, and shows state-of-the-art generalization results on a set of nonlinear dynamics.

Data Augmentation vs. Equivariant Networks: A Theory of Generalization on Dynamics Forecasting

This work derives the generalization bounds for data augmentation and equivariant networks, characterizing their effect on learning in a unified framework, and focuses on non-stationary dynamics forecasting with complex temporal dependencies.



Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
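MAML's two-level loop can be sketched on toy scalar tasks L_i(w) = (w - a_i)^2, where the inner step adapts the shared initialization w to each task and the outer step differentiates through that adaptation (the task targets and step sizes below are illustrative, not from the paper):

```python
import numpy as np

def maml_train(task_targets, alpha=0.1, beta=0.01, steps=2000, w=0.0):
    """Meta-train a shared scalar initialization w over tasks
    L_i(w) = (w - a_i)^2, differentiating through the inner step."""
    for _ in range(steps):
        grad = 0.0
        for a in task_targets:
            w_adapted = w - alpha * 2.0 * (w - a)       # inner gradient step
            # d/dw of (w_adapted - a)^2, chained through the inner step:
            grad += 2.0 * (w_adapted - a) * (1.0 - 2.0 * alpha)
        w -= beta * grad                                # outer meta-update
    return w

w_meta = maml_train([1.0, 3.0])
print(round(w_meta, 3))  # ~2.0: an initialization that adapts fast to both tasks
```

By symmetry the meta-gradient vanishes midway between the two task optima, so the learned initialization sits where a single inner step moves it close to either task.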

Forecasting Sequential Data using Consistent Koopman Autoencoders

This work proposes a novel Consistent Koopman Autoencoder model which leverages the forward and backward dynamics, and achieves accurate estimates for significant prediction horizons, while also being robust to noise.
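The forward/backward consistency idea can be illustrated with a small sketch, assuming linear latent operators (the matrices below are random stand-ins, not learned Koopman approximations): a forward operator C and a backward operator D should invert each other, which can be penalized via ||DC - I||².

```python
import numpy as np

def consistency_loss(C, D):
    """Penalty encouraging the backward operator D to invert the
    forward operator C in the latent space."""
    k = C.shape[0]
    return np.linalg.norm(D @ C - np.eye(k)) ** 2

rng = np.random.default_rng(0)
C = rng.normal(size=(4, 4)) + 4 * np.eye(4)  # well-conditioned forward op
D_good = np.linalg.inv(C)                    # exactly consistent backward op
D_bad = rng.normal(size=(4, 4))              # unrelated backward op

print(consistency_loss(C, D_good))  # ~0
```

Training both directions with this penalty is what ties the forward and backward dynamics together, rather than learning two unrelated predictors.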

Physics-aware Spatiotemporal Modules with Auxiliary Tasks for Meta-Learning

This paper proposes physics-aware modular meta-learning with auxiliary tasks (PiMetaL), a framework whose spatial modules incorporate PDE-independent knowledge and whose temporal modules adapt rapidly to limited data; it mitigates the need for a large number of real-world meta-learning tasks by leveraging simulated data.

N-BEATS: Neural basis expansion analysis for interpretable time series forecasting

The proposed deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train.
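The backward and forward residual links can be sketched as doubly residual stacking: each block emits a backcast (the part of the input it explains) and a forecast; the residual input minus backcast feeds the next block, and the partial forecasts are summed. The toy "block" below is a hypothetical fixed projection, not a trained fully-connected stack.

```python
import numpy as np

def doubly_residual_forecast(x, blocks, horizon):
    """Run N-BEATS-style stacking: subtract each block's backcast from
    the running residual and accumulate each block's forecast."""
    residual = x.copy()
    forecast = np.zeros(horizon)
    for backcast_fn, forecast_fn in blocks:
        forecast += forecast_fn(residual)
        residual = residual - backcast_fn(residual)
    return forecast

lookback, horizon = 8, 3
# One toy block: backcast = the mean level, forecast = repeat that level.
level_block = (
    lambda r: np.full(lookback, r.mean()),
    lambda r: np.full(horizon, r.mean()),
)
x = np.arange(lookback, dtype=float)
print(doubly_residual_forecast(x, [level_block, level_block], horizon))
# [3.5 3.5 3.5]: the first block removes the level, leaving nothing
# for the second, which is exactly the interpretable decomposition.
```

The interpretability claim follows from this structure: each block's forecast is attributable to whatever pattern its basis expansion captures in the residual it receives.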

PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning

PredRNN, a new recurrent network in which a pair of explicitly decoupled memory cells operate in nearly independent transition manners and finally form unified representations of the complex environment, is presented.

Learning Dynamical Systems from Partial Observations

This work proposes a natural data-driven framework for forecasting complex, nonlinear space-time processes, in which the system's dynamics are modelled by an unknown time-varying differential equation and the evolution term is estimated from the data using a neural network.

Learning Stable Deep Dynamics Models

It is shown that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics, such as video textures, in a fully end-to-end fashion.
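One common way to guarantee stability of a learned dynamics model is to project the nominal dynamics so that a Lyapunov function V decreases along trajectories; a minimal sketch follows, assuming the illustrative choice V(x) = ||x||² rather than the learned Lyapunov function from the paper. Whenever the decrease condition is violated, the violating component along the gradient of V is subtracted.

```python
import numpy as np

def project_stable(f_x, x, alpha=0.5):
    """Project nominal dynamics f(x) so that dV/dt <= -alpha * V(x)
    holds for V(x) = ||x||^2."""
    V = x @ x
    grad_V = 2.0 * x
    violation = grad_V @ f_x + alpha * V
    if violation > 0.0:
        # Remove exactly the component of f that violates the condition.
        f_x = f_x - violation * grad_V / (grad_V @ grad_V)
    return f_x

x = np.array([1.0, -2.0])
f_unstable = 0.3 * x                      # nominal dynamics flowing outward
f_safe = project_stable(f_unstable, x)
print(2.0 * x @ f_safe + 0.5 * (x @ x))   # <= 0: decrease condition holds
```

Because the projection is itself differentiable almost everywhere, the stability constraint can be enforced inside end-to-end training rather than checked after the fact.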

Hierarchically Structured Meta-learning

A hierarchically structured meta-learning (HSML) algorithm that explicitly tailors the transferable knowledge to different clusters of tasks, inspired by the way human beings organize knowledge, and extends the hierarchical structure to a continual learning environment.

Deep learning for physical processes: incorporating prior scientific knowledge

It is shown how general background knowledge gained from the physics could be used as a guideline for designing efficient deep learning models and a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model is demonstrated.

Meta-Learning with Latent Embedding Optimization

This work shows that latent embedding optimization can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, and indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.