# Neural Processes

@article{Garnelo2018NeuralP, title={Neural Processes}, author={Marta Garnelo and Jonathan Schwarz and Dan Rosenbaum and Fabio Viola and Danilo Jimenez Rezende and S. M. Ali Eslami and Yee Whye Teh}, journal={ArXiv}, year={2018}, volume={abs/1807.01622} }

A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions, and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible; however, they are also computationally intensive and thus limited in their applicability. We…
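The core mechanism behind this family of models can be illustrated with a toy conditional forward pass: encode each context (x, y) pair with a small network, aggregate the encodings with a permutation-invariant mean, and decode predictions at target inputs conditioned on that shared representation. The sketch below uses random, untrained weights purely for shape-level illustration; all dimensions and weight names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # Two-layer MLP with a tanh nonlinearity.
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Hypothetical dimensions for illustration only.
d_in, d_hid, d_r = 2, 16, 8
# Encoder weights (random, untrained): map each (x_i, y_i) pair to r_i.
We1, be1 = rng.normal(size=(d_in, d_hid)) * 0.5, np.zeros(d_hid)
We2, be2 = rng.normal(size=(d_hid, d_r)) * 0.5, np.zeros(d_r)
# Decoder weights: map (r, x_target) to a predicted mean.
Wd1, bd1 = rng.normal(size=(d_r + 1, d_hid)) * 0.5, np.zeros(d_hid)
Wd2, bd2 = rng.normal(size=(d_hid, 1)) * 0.5, np.zeros(1)

def np_style_forward(x_ctx, y_ctx, x_tgt):
    # 1. Encode each context (x_i, y_i) pair independently.
    pairs = np.stack([x_ctx, y_ctx], axis=-1)            # (n_ctx, 2)
    r_i = mlp(pairs, We1, be1, We2, be2)                 # (n_ctx, d_r)
    # 2. Aggregate with a permutation-invariant mean.
    r = r_i.mean(axis=0)                                 # (d_r,)
    # 3. Decode: condition every target x on the shared representation r.
    inp = np.concatenate([np.tile(r, (len(x_tgt), 1)),
                          x_tgt[:, None]], axis=-1)      # (n_tgt, d_r + 1)
    return mlp(inp, Wd1, bd1, Wd2, bd2)[:, 0]            # predicted means

x_ctx = np.array([-1.0, 0.0, 1.0])
y_ctx = np.sin(x_ctx)
mu = np_style_forward(x_ctx, y_ctx, np.linspace(-2, 2, 5))
print(mu.shape)  # prints (5,)
```

In the full model these weights are of course trained by gradient descent across many sampled context/target splits, and the latent variant replaces the deterministic `r` with a distribution over a latent variable.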


## 122 Citations

Residual Neural Processes

- Computer Science · AAAI 2020

This paper proposes a simple yet effective remedy, the Residual Neural Process (RNP), which leverages traditional BLL for faster training and better prediction, and demonstrates that the RNP shows faster convergence and better performance, both qualitatively and quantitatively.

Learning to Estimate Point-Prediction Uncertainty and Correct Output in Neural Networks

- 2019

Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative…

Global Convolutional Neural Processes

- Computer Science · ArXiv 2021

A Global Convolutional Neural Process (GBCoNP) is built that achieves state-of-the-art log-likelihood among latent NPFs; manipulating the global uncertainty enables probability evaluation on the functional priors.

Quantifying Point-Prediction Uncertainty in Neural Networks via Residual Estimation with an I/O Kernel

- Computer Science, Mathematics · ICLR 2020

A new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN without modifications to model architecture or training pipeline, and provides an important ingredient for building real-world NN applications.

Wasserstein Neural Processes

- Computer Science, Mathematics · ArXiv 2019

It is shown that there are desirable classes of problems where NPs, trained with a maximum-likelihood loss, fail to learn any reasonable distribution, and that this drawback can be addressed by using approximations of the Wasserstein distance.

VFunc: a Deep Generative Model for Functions

- Computer Science, Mathematics · ArXiv 2018

A deep generative model for functions is presented that provides a joint distribution p(f, z) over functions f and latent variables z, which allows efficient sampling from the marginal p(f) and maximization of a variational lower bound on the entropy H(f).

Recurrent Attentive Neural Process for Sequential Data

- Computer Science, Mathematics · ArXiv 2019

The proposed Recurrent Attentive Neural Process (RANP) encapsulates both the inductive biases of recurrent neural networks and the strength of NPs for modelling uncertainty; it can effectively model sequential data and markedly outperforms NPs and LSTMs in a 1D regression toy example as well as in autonomous-driving applications.

Meta Learning as Bayes Risk Minimization

- Computer Science, Mathematics · ArXiv 2020

A probabilistic framework is used to formalize what it means for two tasks to be related, and to reframe the meta-learning problem as Bayes risk minimization (BRM).

Meta-Learning Acquisition Functions for Bayesian Optimization

- Mathematics, Computer Science · ArXiv 2019

This work proposes a method to meta-learn customized optimizers within the well-established framework of Bayesian optimization (BO), allowing the algorithm to utilize the proven generalization capabilities of Gaussian processes.

Neural Likelihoods for Multi-Output Gaussian Processes

- Mathematics, Computer Science · ArXiv 2019

Flexible likelihoods are constructed for multi-output Gaussian process models that leverage neural networks as components that can admit analytic predictive means even when the likelihood is non-linear and the predictive distributions are non-Gaussian.

## References

Showing 1–10 of 46 references

Conditional Neural Processes

- Computer Science, Mathematics · ICML 2018

Conditional Neural Processes are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent, and scale to complex functions and large datasets.

Manifold Gaussian Processes for regression

- Mathematics, Computer Science · IJCNN 2016

Manifold Gaussian Processes is a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space, allowing it to learn data representations that are useful for the overall regression task.

Deep Gaussian Processes

- Mathematics, Computer Science · AISTATS 2013

Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.

Weight Uncertainty in Neural Networks

- Mathematics, Computer Science · ArXiv 2015

This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

- Computer Science, Mathematics · ICML 2014

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

- Mathematics, Computer Science · ICML 2016

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
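The idea summarized above, keeping dropout active at test time and averaging many stochastic forward passes to approximate a predictive posterior, can be sketched in a few lines. The weights below are random stand-ins for a trained network; dimensions and the dropout rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "trained" weights for a small one-hidden-layer regressor.
W1, b1 = rng.normal(size=(1, 32)) * 0.5, np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)) * 0.5, np.zeros(1)

def forward_with_dropout(x, p=0.5):
    # Key idea of MC dropout: keep dropout ACTIVE at prediction time.
    h = np.tanh(x @ W1 + b1)
    mask = rng.random(h.shape) > p   # drop each hidden unit with probability p
    h = h * mask / (1 - p)           # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.3]])
# T stochastic forward passes act as approximate posterior samples.
samples = np.array([forward_with_dropout(x)[0, 0] for _ in range(200)])
mean, std = samples.mean(), samples.std()  # predictive mean and uncertainty
```

The spread `std` across passes is the uncertainty estimate; with dropout disabled, every pass would return the same value and the spread would collapse to zero.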

Probabilistic Model-Agnostic Meta-Learning

- Computer Science, Mathematics · NeurIPS 2018

This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution that is trained via a variational lower bound, and shows how reasoning about ambiguity can also be used for downstream active learning problems.

Towards a Neural Statistician

- Computer Science, Mathematics · ICLR 2017

An extension of the variational autoencoder is demonstrated that learns to compute representations, or statistics, of datasets in an unsupervised fashion; the learnt statistics can be used for clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets, and classifying previously unseen classes.

Differentiable Compositional Kernel Learning for Gaussian Processes

- Computer Science, Mathematics · ICML 2018

The Neural Kernel Network (NKN), a flexible family of kernels represented by a neural network, is presented, which is based on the composition rules for kernels, so that each unit of the network corresponds to a valid kernel.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

- Computer Science · ICML 2017

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…