Corpus ID: 222208683

Uncertainty in Neural Processes

@article{Naderiparizi2020UncertaintyIN,
  title={Uncertainty in Neural Processes},
  author={Saeid Naderiparizi and Ke-Li Chiu and Benjamin Bloem-Reddy and Frank D. Wood},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.03753}
}
We explore the effects of architecture and training objective choice on amortized posterior predictive inference in probabilistic conditional generative models. We intend this work as a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large. We instead focus our attention on the case where the amount of conditioning data is small. We highlight specific architecture and objective choices that we find lead to qualitative…
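As background for the family of models under study, the amortized posterior predictive of a conditional-neural-process-style model can be sketched as below. The encoder h_theta, set summary r_D, and Gaussian decoder are standard CNP-style notation used here for illustration, not formulas quoted from the paper:

    % Amortized posterior predictive (illustrative CNP-style notation)
    r_D = \frac{1}{|D|} \sum_{(x_i, y_i) \in D} h_\theta(x_i, y_i)
    p(y^* \mid x^*, D) \approx \mathcal{N}\!\left(y^*;\, \mu_\theta(x^*, r_D),\, \sigma^2_\theta(x^*, r_D)\right)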

Evidential Conditional Neural Processes

The Evidential Conditional Neural Process (ECNP) is proposed, which replaces the standard Gaussian predictive distribution used by the CNP with a much richer hierarchical Bayesian structure through evidential learning, achieving an epistemic-aleatoric uncertainty decomposition.
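As a sketch of what an evidential head provides: in the deep evidential regression literature, the network outputs Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) rather than a single mean and variance, and the two uncertainty types then have closed forms. This is the standard evidential decomposition, included for context rather than quoted from the ECNP paper:

    % Uncertainty decomposition under a Normal-Inverse-Gamma output head
    \mathbb{E}[\sigma^2] = \frac{\beta}{\alpha - 1}          % aleatoric uncertainty
    \mathrm{Var}[\mu]    = \frac{\beta}{\nu(\alpha - 1)}     % epistemic uncertainty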

References

Showing 1–10 of 21 references

Probing Uncertainty Estimates of Neural Processes

This work analyzes the uncertainty estimates obtained via neural processes by proposing a series of metrics that probe the model along various interpretable axes; these metrics can be useful for model criticism and for model selection with respect to new tasks and datasets.

Conditional Neural Processes

Conditional Neural Processes are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks, trained via gradient descent, and scale to complex functions and large datasets.
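A minimal sketch of the CNP encode-aggregate-decode structure, written in PyTorch for illustration; the layer sizes and class name are assumptions, not the authors' reference implementation:

    import torch
    import torch.nn as nn

    class CNP(nn.Module):
        """Minimal Conditional Neural Process: encode context points,
        aggregate by mean pooling, decode a Gaussian at each target."""
        def __init__(self, x_dim=1, y_dim=1, r_dim=128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
                nn.Linear(r_dim, r_dim))
            self.decoder = nn.Sequential(
                nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
                nn.Linear(r_dim, 2 * y_dim))  # predicts mean and raw scale

        def forward(self, x_ctx, y_ctx, x_tgt):
            # r: a single permutation-invariant summary of the context set
            r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=1, keepdim=True)
            r = r.expand(-1, x_tgt.size(1), -1)
            out = self.decoder(torch.cat([x_tgt, r], dim=-1))
            mean, raw_scale = out.chunk(2, dim=-1)
            sigma = 0.1 + 0.9 * torch.nn.functional.softplus(raw_scale)
            return mean, sigma

Training would maximize torch.distributions.Normal(mean, sigma).log_prob(y_tgt) over random context/target splits of each function.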

Empirical Evaluation of Neural Process Objectives

This work empirically evaluates the performance of NPs under different objectives and model specifications and finds that some objectives and model specifications clearly outperform others.
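For reference, two commonly compared NP training objectives, written here in the notation of the neural process literature generally (C is the context set, T the target set) rather than quoted from this paper:

    % Conditional (maximum-likelihood) objective
    \mathcal{L}_{ML} = \log p_\theta(y_T \mid x_T, x_C, y_C)
    % Variational (ELBO-style) objective with latent z
    \mathcal{L}_{VI} = \mathbb{E}_{q_\phi(z \mid C \cup T)}\!\left[\log p_\theta(y_T \mid x_T, z)\right]
                       - \mathrm{KL}\!\left(q_\phi(z \mid C \cup T) \,\|\, q_\phi(z \mid C)\right)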

A Note on the Inception Score

New insights are provided into the Inception Score, a recently proposed and widely used evaluation metric for generative models, and it is demonstrated that the score fails to provide useful guidance when comparing models.
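For context, the Inception Score under discussion is defined as follows, where p(y | x) is the Inception network's label distribution for a generated image x and p(y) is its marginal over the generator's distribution p_g (the standard definition, included for reference):

    \mathrm{IS}(G) = \exp\!\left( \mathbb{E}_{x \sim p_g}\left[ \mathrm{KL}\!\left( p(y \mid x) \,\|\, p(y) \right) \right] \right)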

Importance Weighted Hierarchical Variational Inference

This work introduces a new family of variational upper bounds on the marginal log density in hierarchical (latent variable) models. From these, it derives a family of increasingly tight variational lower bounds on the otherwise intractable evidence lower bound for hierarchical variational distributions, enabling the use of more expressive approximate posteriors.
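Schematically, for a hierarchical variational distribution q(z) = \int q(z \mid \psi)\, q(\psi)\, d\psi with an auxiliary distribution \tau(\psi \mid z), the multi-sample upper bound has the following shape (my paraphrase of the bound's structure; the notation is illustrative):

    \log q(z) \le \mathbb{E}_{\psi_0 \sim q(\psi \mid z)}\, \mathbb{E}_{\psi_{1:K} \sim \tau(\psi \mid z)}
        \left[ \log \frac{1}{K+1} \sum_{k=0}^{K} \frac{q(z, \psi_k)}{\tau(\psi_k \mid z)} \right]

With K = 0 this reduces to the standard hierarchical variational model bound, and larger K tightens it; substituting such an upper bound for log q(z) inside the ELBO yields the advertised lower bounds.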

Meta-Learning Probabilistic Inference for Prediction

VERSA is introduced, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as input, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. This amortizes the cost of inference and removes the need for second derivatives during training.
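A hedged sketch of the amortization idea (the class name and shapes here are illustrative, not VERSA's actual architecture): a set encoder maps a class's support embeddings to the mean and variance of a distribution over that class's classifier-head weights, so adapting to a new task needs only forward passes.

    import torch
    import torch.nn as nn

    class AmortizedHead(nn.Module):
        """Illustrative amortization network: maps a class's support
        embeddings to a Gaussian over that class's linear-head weights."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.mu = nn.Linear(feat_dim, feat_dim)
            self.log_var = nn.Linear(feat_dim, feat_dim)

        def forward(self, support_feats):  # (n_shots, feat_dim), any n_shots
            pooled = support_feats.mean(dim=0)  # permutation-invariant pooling
            return self.mu(pooled), self.log_var(pooled)

    # Usage: sample weights w ~ N(mu, exp(log_var)) per class, score query
    # features against the sampled weights, and average predictions over
    # several weight samples to approximate the posterior predictive.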

Rethinking the Inception Architecture for Computer Vision

This work explores ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
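As an illustration of the factorization idea (a generic PyTorch sketch, not the paper's exact Inception module): a 5x5 convolution can be replaced by two stacked 3x3 convolutions with the same receptive field but fewer parameters, and an n x n convolution by a 1 x n followed by an n x 1.

    import torch.nn as nn

    # 5x5 receptive field via two 3x3 convolutions (fewer weights than one 5x5)
    factorized_5x5 = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
    )

    # 7x7 convolution factorized into asymmetric 1x7 and 7x1 convolutions
    factorized_7x7 = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=(1, 7), padding=(0, 3)),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, kernel_size=(7, 1), padding=(3, 0)),
    )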

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
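For reference, the semi-supervised scheme from that paper augments a K-class classifier with an extra "generated" class, so the discriminator loss combines a supervised and an unsupervised term (the standard formulation, paraphrased here):

    L = L_{supervised} + L_{unsupervised}
    L_{supervised}   = -\,\mathbb{E}_{x, y \sim p_{data}} \log p_{model}(y \mid x,\, y < K+1)
    L_{unsupervised} = -\,\mathbb{E}_{x \sim p_{data}} \log\!\left[1 - p_{model}(y = K+1 \mid x)\right]
                       -\,\mathbb{E}_{x \sim G} \log p_{model}(y = K+1 \mid x)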

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions, and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
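The FID itself is the Fréchet distance between Gaussians fitted to Inception activations of real and generated images. A minimal NumPy/SciPy sketch; the function name and interface are mine, only the formula comes from the paper:

    import numpy as np
    from scipy import linalg

    def frechet_inception_distance(act_real, act_gen):
        """FID between two sets of Inception activations, each shape (n, d)."""
        mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
        cov_r = np.cov(act_real, rowvar=False)
        cov_g = np.cov(act_gen, rowvar=False)
        covmean = linalg.sqrtm(cov_r @ cov_g)  # matrix square root
        covmean = covmean.real                 # drop tiny imaginary parts
        diff = mu_r - mu_g
        return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)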

Amortized Inference in Probabilistic Reasoning

It is argued that the brain operates in the setting of amortized inference, where numerous related queries must be answered (e.g., recognizing a scene from multiple viewpoints); in this setting, memoryless algorithms can be computationally wasteful.