Corpus ID: 49568863

Neural Processes

@article{Garnelo2018NeuralP,
  title={Neural Processes},
  author={Marta Garnelo and Jonathan Schwarz and Dan Rosenbaum and Fabio Viola and Danilo Jimenez Rezende and S. M. Ali Eslami and Yee Whye Teh},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.01622}
}
A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions, and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible; however, they are also computationally intensive and thus limited in their applicability. We…
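
As an illustration of the GP half of this comparison, the following sketch (not from the paper; the toy data and RBF kernel are assumptions) shows exact GP regression: the posterior over function values is obtained by linear algebra on the kernel matrix, and the Cholesky/solve step is what makes inference scale cubically in the number of observations.

```python
# Minimal exact-GP-regression sketch: posterior mean and variance at test
# inputs given noisy observations. The Cholesky solve is the O(n^3) step
# that makes GPs computationally intensive for large datasets.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=20)                 # labelled collection of data
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)
x_test = np.linspace(-3, 3, 100)

noise = 0.1 ** 2
K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))   # n x n
K_s = rbf_kernel(x_train, x_test)                                  # n x m
K_ss = rbf_kernel(x_test, x_test)

# Posterior via the rules of probabilistic inference (Gaussian conditioning).
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
post_mean = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
post_var = np.diag(K_ss - v.T @ v)
print(post_mean[:3], post_var[:3])
```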

Citations

Bootstrapping Neural Processes

TLDR
The Bootstrapping Neural Process (BNP) applies the bootstrap, a classical data-driven technique for estimating uncertainty, to learn the stochasticity in NPs without assuming a particular form; the efficacy of BNP on various types of data and its robustness in the presence of model-data mismatch are demonstrated.
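
As background for the summary above, here is a minimal sketch of the classical bootstrap it refers to, applied to a stand-in polynomial regressor rather than an NP: resample the data with replacement, refit, and read uncertainty off the spread of the resulting predictions. The base model and toy data are illustrative assumptions, not BNP itself.

```python
# Classical bootstrap for predictive uncertainty: refit a base model on
# resampled-with-replacement copies of the data and use the prediction
# spread as a data-driven uncertainty estimate.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = np.sin(3 * x) + 0.2 * rng.standard_normal(40)
x_eval = np.linspace(-1, 1, 200)

def fit_and_predict(x_fit, y_fit, x_new, degree=5):
    """Base learner: least-squares polynomial fit (stand-in for an NP/NN)."""
    coeffs = np.polyfit(x_fit, y_fit, degree)
    return np.polyval(coeffs, x_new)

preds = []
for _ in range(200):                                 # bootstrap replicates
    idx = rng.integers(0, len(x), size=len(x))       # resample with replacement
    preds.append(fit_and_predict(x[idx], y[idx], x_eval))
preds = np.stack(preds)

mean = preds.mean(axis=0)                            # point prediction
std = preds.std(axis=0)                              # uncertainty estimate
print(mean[:3], std[:3])
```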

Residual Neural Processes

TLDR
This paper proposes a simple yet effective remedy, the Residual Neural Process (RNP), which leverages traditional BLL for faster training and better prediction, and demonstrates that the RNP shows faster convergence and better performance, both qualitatively and quantitatively.

Meta-Learning Priors for Efficient Online Bayesian Regression

TLDR
The proposed ALPaCA is found to be a promising plug-in tool for many regression tasks in robotics where scalability and data-efficiency are important, and it outperforms kernel-based GP regression as well as state-of-the-art meta-learning approaches.

Learning to Estimate Point-Prediction Uncertainty and Correct Output in Neural Networks

TLDR
A new framework called RIO is developed that makes it possible to estimate uncertainty in any pretrained standard NN without modifications to model architecture or training pipeline, and provides an important ingredient in building real-world applications of NNs.

Global Convolutional Neural Processes

TLDR
A Global Convolutional Neural Process (GBCoNP), a member of the latent neural process family (NPF), is built that achieves state-of-the-art log-likelihood among latent NPFs, and manipulating its global uncertainty enables probability evaluation on the functional priors.

Transforming Gaussian Processes With Normalizing Flows

TLDR
A variational approximation to the resulting Bayesian inference problem is derived, which is as fast as stochastic variational GP regression and makes the model a computationally efficient alternative to other hierarchical extensions of GP priors.
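
A rough sketch of the construction named in the title, under the assumption that an elementwise monotone transformation is applied to a GP prior sample; the particular sinh–arcsinh transform below is just one convenient invertible choice, not necessarily the flows used in the paper.

```python
# Draw a function from a GP prior, then push it through an elementwise
# invertible (monotone) map, yielding a process with non-Gaussian marginals.
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

rng = np.random.default_rng(9)
x = np.linspace(-1, 1, 50)
K = rbf(x, x) + 1e-6 * np.eye(50)
f = np.linalg.cholesky(K) @ rng.standard_normal(50)    # GP prior sample

def flow(f, a=1.0, b=0.5):
    """Elementwise invertible map (sinh-arcsinh), monotone in f for b > 0."""
    return np.sinh(b * np.arcsinh(f) + a)

g = flow(f)                                            # sample from the warped process
print(g[:5])
```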

Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel

TLDR
A new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN without modifications to model architecture or training pipeline, and provides an important ingredient for building real-world NN applications.
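
To make the "residual estimation with an I/O kernel" idea concrete, here is a hedged sketch: a GP is fit to the residuals of a fixed point predictor, using a kernel that sums a kernel on the inputs with a kernel on the predictor's outputs, and is then used both to correct the prediction and to attach uncertainty to it. The toy predictor and RBF components are assumptions for illustration, not the paper's exact setup.

```python
# GP on residuals y - f_nn(x) of a fixed point predictor, with a kernel
# defined on both the inputs and the predictor's outputs ("I/O kernel").
import numpy as np

def rbf(a, b, ls=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 30)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(30)

f_nn = lambda t: 0.8 * t                     # stand-in for a pretrained NN
resid = y - f_nn(x)

def io_kernel(x1, o1, x2, o2):
    """I/O kernel: kernel on inputs + kernel on the predictor's outputs."""
    return rbf(x1, x2) + rbf(o1, o2)

noise = 0.1 ** 2
K = io_kernel(x, f_nn(x), x, f_nn(x)) + noise * np.eye(len(x))
alpha = np.linalg.solve(K, resid)

x_new = np.linspace(-2, 2, 5)
k_star = io_kernel(x, f_nn(x), x_new, f_nn(x_new))
corrected = f_nn(x_new) + k_star.T @ alpha   # corrected point prediction
var = np.diag(io_kernel(x_new, f_nn(x_new), x_new, f_nn(x_new))
              - k_star.T @ np.linalg.solve(K, k_star))  # uncertainty estimate
print(corrected, var)
```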

Wasserstein Neural Processes

TLDR
It is shown that there are desirable classes of problems where NPs trained with a maximum-likelihood loss fail to learn any reasonable distribution, and that this drawback can be overcome by using approximations of the Wasserstein distance.
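
For intuition about the quantity involved, the following sketch (1-D, equal-size empirical samples assumed) computes the 1-Wasserstein distance by matching sorted samples; unlike a likelihood, it remains informative even when the model and data distributions barely overlap.

```python
# 1-Wasserstein distance between equal-size 1-D empirical samples:
# mean absolute gap after sorting both sample sets.
import numpy as np

def wasserstein_1d(a, b):
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(3)
samples_p = rng.normal(0.0, 1.0, 1000)   # model samples
samples_q = rng.normal(5.0, 1.0, 1000)   # data samples with nearly disjoint support
print(wasserstein_1d(samples_p, samples_q))   # ~5: still a useful training signal
```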

Neural Clustering Processes

TLDR
This work introduces deep network architectures trained with labeled samples from any generative model of clustered datasets, and develops two complementary approaches to this task, requiring either O(N) or O(K) network forward passes per dataset.

VFunc: a Deep Generative Model for Functions

TLDR
A deep generative model for functions is presented that provides a joint distribution p(f, z) over functions f and latent variables z, which allows efficient sampling from the marginal p(f) and maximisation of a variational lower bound on the entropy H(f).
...

References

Showing 1–10 of 45 references

Conditional Neural Processes

TLDR
Conditional Neural Processes are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent, and they scale to complex functions and large datasets.
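
A minimal, untrained sketch of the encoder–aggregator–decoder structure described above; the random weights and tiny dimensions are assumptions for illustration, whereas in the paper the networks are trained by gradient descent.

```python
# CNP-style forward pass: encode each context (x, y) pair, aggregate with a
# permutation-invariant mean, and decode a Gaussian prediction per target.
import numpy as np

rng = np.random.default_rng(4)
hidden = 16
W_enc = rng.standard_normal((2, hidden)) / np.sqrt(2)            # stand-in for a trained encoder
W_dec = rng.standard_normal((hidden + 1, 2)) / np.sqrt(hidden + 1)  # stand-in for a trained decoder

def cnp_predict(x_context, y_context, x_target):
    pairs = np.stack([x_context, y_context], axis=1)              # (n_ctx, 2)
    r_i = np.tanh(pairs @ W_enc)                                  # encode each pair
    r = r_i.mean(axis=0)                                          # aggregate into one representation
    inp = np.column_stack([x_target, np.tile(r, (len(x_target), 1))])
    out = inp @ W_dec                                             # decode per target input
    mean, log_sigma = out[:, 0], out[:, 1]
    return mean, np.exp(log_sigma)                                # predictive mean and std

mu, sigma = cnp_predict(np.array([-1.0, 0.0, 1.0]),
                        np.array([0.5, 0.0, -0.5]),
                        np.linspace(-1, 1, 5))
print(mu, sigma)
```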

Manifold Gaussian Processes for regression

TLDR
Manifold Gaussian Processes is a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space, allowing it to learn data representations that are useful for the overall regression task.
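
A sketch of where the transformation enters: the kernel is evaluated on features phi(x) rather than on x directly. In the paper phi is learned jointly with the GP; here it is a fixed random map, purely to show the structure.

```python
# Kernel on a transformed feature space: k(x, x') = RBF(phi(x), phi(x')).
import numpy as np

rng = np.random.default_rng(5)
W1 = rng.standard_normal((1, 8))
W2 = rng.standard_normal((8, 3))

def phi(x):
    """Feature-space transformation of scalar inputs (stand-in for a learned map)."""
    return np.tanh(np.tanh(x[:, None] @ W1) @ W2)      # (n, 3)

def manifold_rbf(xa, xb, ls=1.0):
    fa, fb = phi(xa), phi(xb)
    sq = ((fa[:, None, :] - fb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ls ** 2)

x = np.linspace(-2, 2, 10)
print(manifold_rbf(x, x).shape)   # (10, 10) covariance defined through the feature space
```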

Deep Gaussian Processes

TLDR
Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.

Weight Uncertainty in Neural Networks

TLDR
This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
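
A minimal sketch of the weight distribution and objective described above, shrunk to a single linear model and one Monte Carlo sample; no training loop is run here, and the toy data and variational parameters are assumptions made for the example.

```python
# Bayes-by-Backprop-style objective: sample weights by reparameterisation
# from a diagonal Gaussian posterior and combine data fit with a KL term.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-1, 1, 50)
y = 2.0 * x + 0.1 * rng.standard_normal(50)

mu = np.zeros(2)                              # variational means (weight, bias)
rho = np.full(2, -3.0)                        # scale = softplus(rho)

def softplus(t):
    return np.log1p(np.exp(t))

eps = rng.standard_normal(2)
w = mu + softplus(rho) * eps                  # reparameterised weight sample

pred = w[0] * x + w[1]
nll = 0.5 * np.sum((y - pred) ** 2) / 0.1**2  # Gaussian NLL (up to a constant)

sigma = softplus(rho)
kl = np.sum(np.log(1.0 / sigma) + 0.5 * (sigma**2 + mu**2) - 0.5)  # KL(q || N(0, 1))

loss = nll + kl                               # quantity minimised by backprop in the paper
print(loss)
```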

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
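
The practical recipe implied by this result can be sketched as follows (with an untrained random-weight network assumed): keep dropout active at prediction time and average several stochastic forward passes, using their spread as the model-uncertainty estimate.

```python
# MC-dropout-style prediction: dropout stays on at test time and the
# statistics over stochastic forward passes give mean and uncertainty.
import numpy as np

rng = np.random.default_rng(7)
W1 = rng.standard_normal((1, 32))
W2 = rng.standard_normal((32, 1)) / np.sqrt(32)

def forward_with_dropout(x, p_drop=0.5):
    h = np.tanh(x[:, None] @ W1)
    mask = (rng.random(h.shape) > p_drop) / (1 - p_drop)   # inverted dropout mask
    return ((h * mask) @ W2).ravel()

x_test = np.linspace(-1, 1, 5)
passes = np.stack([forward_with_dropout(x_test) for _ in range(100)])

pred_mean = passes.mean(axis=0)
pred_std = passes.std(axis=0)    # approximate model uncertainty
print(pred_mean, pred_std)
```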

Probabilistic Model-Agnostic Meta-Learning

TLDR
This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution that is trained via a variational lower bound, and shows how reasoning about ambiguity can also be used for downstream active learning problems.

Towards a Neural Statistician

TLDR
An extension of the variational autoencoder is demonstrated that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion; the learnt statistics can be used for clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets, and classifying previously unseen classes.

Differentiable Compositional Kernel Learning for Gaussian Processes

TLDR
The Neural Kernel Network (NKN), a flexible family of kernels represented by a neural network, is presented, which is based on the composition rules for kernels, so that each unit of the network corresponds to a valid kernel.
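
The composition rules in question can be illustrated directly: non-negative sums and elementwise products of valid kernels are again valid kernels, so a network built from such units always outputs a kernel. The hand-fixed combination weights below are an assumption; in the NKN they are learned.

```python
# One hand-fixed "kernel network" unit: a weighted sum of primitive kernels
# followed by a product with another kernel, all of which preserve validity.
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def linear(a, b):
    return a[:, None] * b[None, :]

def composed_kernel(a, b):
    k_sum = 0.7 * rbf(a, b, ls=0.5) + 0.3 * linear(a, b)   # non-negative sum
    return k_sum * rbf(a, b, ls=2.0)                        # product of valid kernels

x = np.linspace(-1, 1, 6)
K = composed_kernel(x, x)
print(np.all(np.linalg.eigvalsh(K + 1e-9 * np.eye(6)) >= 0))  # PSD check, expected: True
```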

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
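
A toy sketch of the inner/outer loop, shrunk to a one-parameter linear model on synthetic tasks so the meta-gradient (including the second-order term) can be written in closed form; the task distribution, step sizes, and tiny model are assumptions made for the example, not the paper's experimental setup.

```python
# MAML-style meta-learning for f(x) = w * x: inner-loop adaptation on a
# support set, outer-loop update through the adapted parameter on a query set.
import numpy as np

rng = np.random.default_rng(8)
alpha, beta = 0.05, 0.01          # inner and outer learning rates
w = 0.0                           # meta-parameter (shared initialisation)

def sample_task():
    """A task is a random slope; support/query sets are drawn from it."""
    slope = rng.uniform(-2, 2)
    xs, xq = rng.uniform(-1, 1, 10), rng.uniform(-1, 1, 10)
    return xs, slope * xs, xq, slope * xq

def grad(w_, x, y):
    """Gradient of mean squared error for the linear model."""
    return -2.0 * np.mean(x * (y - w_ * x))

for _ in range(1000):
    xs, ys, xq, yq = sample_task()
    w_adapted = w - alpha * grad(w, xs, ys)              # inner-loop adaptation
    dadapted_dw = 1.0 - alpha * 2.0 * np.mean(xs * xs)   # chain rule through the inner step
    meta_grad = grad(w_adapted, xq, yq) * dadapted_dw
    w -= beta * meta_grad                                # outer-loop meta-update
print(w)   # stays near 0: by symmetry, the best shared initialisation for these toy tasks
```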