Corpus ID: 233444149

Meta-learning using privileged information for dynamics

@article{Day2021MetalearningUP,
  title={Meta-learning using privileged information for dynamics},
  author={Ben Day and Alexander Norcliffe and Jacob Moss and Pietro Li{\`o}},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.14290}
}
Neural ODE Processes approach the problem of meta-learning for dynamics using a latent variable model, which permits a flexible aggregation of contextual information. This flexibility is inherited from the Neural Process framework and allows the model to aggregate sets of context observations of arbitrary size into a fixed-length representation. In the physical sciences, we often have access to structured knowledge in addition to raw observations of a system, such as the value of a conserved…
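
The fixed-length aggregation described here is, throughout the Neural Process family, a permutation-invariant encoder: each (time, observation) context pair is embedded by a small network and the embeddings are averaged, so the result has the same size whatever the number of context points. A minimal PyTorch sketch of that idea follows; the module name, layer sizes, and dimensions are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ContextAggregator(nn.Module):
    """Encode a variable-size set of (t, x) observations into one vector."""
    def __init__(self, x_dim, r_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(1 + x_dim, 128), nn.ReLU(),
            nn.Linear(128, r_dim),
        )

    def forward(self, t, x):
        # t: (batch, n_context, 1), x: (batch, n_context, x_dim)
        r_i = self.encoder(torch.cat([t, x], dim=-1))  # per-point embeddings
        return r_i.mean(dim=1)  # mean over the set: order- and size-invariant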

References

Neural Ordinary Differential Equations

This work shows how to scalably backpropagate through any ODE solver, without access to its internal operations, which allows end-to-end training of ODEs within larger models.
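
The scalable backpropagation referred to here is the adjoint sensitivity method. For a state z(t) evolving as dz/dt = f(z(t), t, θ) and a scalar loss L, the adjoint a(t) = ∂L/∂z(t) obeys a second ODE that is solved backwards in time alongside the parameter gradient:

\[
\frac{\mathrm{d}a(t)}{\mathrm{d}t} = -\,a(t)^{\top}\frac{\partial f(z(t), t, \theta)}{\partial z},
\qquad
\frac{\mathrm{d}L}{\mathrm{d}\theta} = -\int_{t_1}^{t_0} a(t)^{\top}\frac{\partial f(z(t), t, \theta)}{\partial \theta}\,\mathrm{d}t,
\]

so no intermediate solver states need to be stored during the forward pass.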

Neural ODE Processes

By maintaining an adaptive data-dependent distribution over the underlying ODE, this model can successfully capture the dynamics of low-dimensional systems from just a few data-points and scale up to challenging high-dimensional time-series with unknown latent dynamics such as rotating MNIST digits.
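
The "adaptive data-dependent distribution over the underlying ODE" can be pictured as sampling a latent code from the aggregated context and letting that code parameterise the dynamics that an off-the-shelf solver integrates. A rough sketch, assuming the torchdiffeq odeint interface and the hypothetical ContextAggregator above:

import torch
import torch.nn as nn
from torchdiffeq import odeint  # black-box solver: odeint(func, y0, t)

class LatentDynamics(nn.Module):
    """ODE right-hand side conditioned on a sampled latent code z."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, 128), nn.Tanh(),
                                 nn.Linear(128, x_dim))
        self.z = None  # set to a sample of the latent before integrating

    def forward(self, t, x):
        return self.net(torch.cat([x, self.z], dim=-1))

# sketch of use: r = aggregator(t_ctx, x_ctx); z sampled from N(mu(r), sigma(r));
# dynamics.z = z; trajectory = odeint(dynamics, x0, t_target)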

Conditional Neural Processes

Conditional Neural Processes are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent, yet still scale to complex functions and large datasets.

Meta-Learning in Neural Networks: A Survey

A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, and promising applications and successes of meta-learning, such as few-shot learning and reinforcement learning, are surveyed.

Deep Learning Under Privileged Information Using Heteroscedastic Dropout

This work proposes a heteroscedastic dropout in which the variance of the dropout is a function of privileged information; this significantly increases sample efficiency during learning, giving higher accuracy by a large margin when the number of training examples is limited.
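
Concretely, the mechanism can be read as multiplicative Gaussian noise on a hidden layer whose per-unit variance is predicted from the privileged input x*, which is only available during training. A hedged sketch (module name and shapes are placeholders, not the paper's architecture):

import torch
import torch.nn as nn

class HeteroscedasticDropout(nn.Module):
    """Multiplicative Gaussian noise whose variance is a function of
    privileged information x_star (available only at training time)."""
    def __init__(self, priv_dim, hidden_dim):
        super().__init__()
        self.log_var = nn.Linear(priv_dim, hidden_dim)  # predicts log sigma^2(x*)

    def forward(self, h, x_star=None):
        if self.training and x_star is not None:
            std = torch.exp(0.5 * self.log_var(x_star))
            return h * (1.0 + std * torch.randn_like(h))  # mean-one noise
        return h  # at test time no privileged info: plain identity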

Empirical Evaluation of Neural Process Objectives

This work empirically evaluates the performance of NPs for different objectives and model specifications and finds that some objectives and model specifications clearly outperform others.

Mind the Nuisance: Gaussian Process Classification using Privileged Noise

It is shown that privileged information can naturally be treated as noise in the latent function of a Gaussian process classifier (GPC), and that advanced neural networks and deep learning methods can be compressed as privileged information.
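
In symbols, the construction this summary points at puts heteroscedastic noise, driven by the privileged features x*, into the classifier's latent function; roughly (notation assumed here, not taken verbatim from the paper):

\[
y_i = \operatorname{sign}\big(f(x_i) + \varepsilon_i\big),
\qquad
\varepsilon_i \sim \mathcal{N}\big(0,\ \exp(g(x_i^{*}))\big),
\]

with GP priors on both f and g, so the privileged information controls how noisy each training label is treated as being.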

On Second Order Behaviour in Augmented Neural ODEs

This work shows how the adjoint sensitivity method can be extended to SONODEs and proves that optimising the equivalent first-order coupled ODE is computationally more efficient; it also extends the theoretical understanding of the broader class of Augmented NODEs by showing that they too can learn higher-order dynamics with a minimal number of augmented dimensions, but at the cost of interpretability.
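
The equivalence claimed here is the standard reduction of a second-order ODE to a coupled first-order system, obtained by treating the velocity as a separate state:

\[
\ddot{x}(t) = f\big(x(t), \dot{x}(t), t\big)
\quad\Longleftrightarrow\quad
\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix} x \\ v \end{pmatrix}
= \begin{pmatrix} v \\ f(x, v, t) \end{pmatrix},
\]

which an ordinary first-order solver, and hence the adjoint method above, can handle directly.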

Neural Controlled Differential Equations for Irregular Time Series

The resulting neural controlled differential equation model is directly applicable to the general setting of partially-observed, irregularly-sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efficient adjoint-based backpropagation even across observations.
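
The controlled differential equation in question drives the hidden state with a continuous path X built by interpolating the observations; schematically, following the paper's formulation,

\[
z_{t} = z_{t_0} + \int_{t_0}^{t} f_{\theta}(z_s)\,\mathrm{d}X_s,
\]

so irregular sampling and partial observation are absorbed into the construction of X rather than into the model itself.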

Accurate Uncertainties for Deep Learning Using Calibrated Regression

This work proposes a simple procedure for calibrating any regression algorithm, and finds that it consistently outputs well-calibrated credible intervals while improving performance on time series forecasting and model-based reinforcement learning tasks.
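
The procedure is recalibration in the spirit of Platt scaling for regression: evaluate each predictive CDF at its observed target on a held-out set, then fit a monotone map from predicted to empirical quantiles. A sketch using scikit-learn's isotonic regression (function and variable names are illustrative):

import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(cdf_at_y):
    """cdf_at_y[i] = F_i(y_i): the model's predictive CDF evaluated at the
    i-th observed target on a held-out calibration set."""
    empirical = np.array([(cdf_at_y <= p).mean() for p in cdf_at_y])
    return IsotonicRegression(out_of_bounds="clip").fit(cdf_at_y, empirical)

# use: R = fit_recalibrator(p_cal); calibrated = R.predict(p_test)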