Corpus ID: 174800134

Amortized Monte Carlo Integration

@article{Golinski2019AmortizedMC,
  title={Amortized Monte Carlo Integration},
  author={Adam Goliński and Frank Wood and Tom Rainforth},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.08082}
}
Current approaches to amortizing Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is, in turn, used to calculate expectations for one or more target functions - a computational pipeline which is inefficient when the target function(s) are known upfront. In this paper, we address this inefficiency by introducing AMCI, a method for amortizing Monte Carlo integration directly. AMCI operates similarly to amortized inference but produces… 
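As a hedged illustration of this pipeline (a minimal NumPy sketch, not the authors' code; the conjugate-Gaussian model, the name snis_expectation, and all numbers are assumptions chosen for simplicity), the conventional approach draws samples from a proposal standing in for an amortized posterior approximation and then forms the expectation of a target function f by self-normalized importance sampling, even though f is known before any samples are drawn:

# Minimal sketch (not the paper's code). Assumed toy model: x ~ N(0, 1),
# y | x ~ N(x, 1), so the exact posterior is p(x | y) = N(y/2, 1/2).
import numpy as np

rng = np.random.default_rng(0)

def snis_expectation(f, y, prop_mean, prop_std, n=100_000):
    # Self-normalised importance sampling estimate of E_{p(x|y)}[f(x)];
    # the proposal stands in for an amortised posterior approximation q(x | y).
    x = rng.normal(prop_mean, prop_std, size=n)
    log_joint = -0.5 * x**2 - 0.5 * (y - x) ** 2        # log p(x, y), up to an additive constant
    log_q = -0.5 * ((x - prop_mean) / prop_std) ** 2    # log q(x),    up to an additive constant
    w = np.exp(log_joint - log_q)                        # unnormalised importance weights
    return np.sum(w * f(x)) / np.sum(w)

y_obs = 1.5
f = lambda x: x ** 2                                     # target function known upfront
print(snis_expectation(f, y_obs, prop_mean=y_obs / 2, prop_std=0.5 ** 0.5))
print(0.5 + (y_obs / 2) ** 2)                            # exact E[x^2 | y] for comparison

AMCI's premise is that, when f is known upfront in this way, the amortized proposal can be tailored to f itself rather than to the posterior alone.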

Target-Aware Bayesian Inference: How to Beat Optimal Conventional Estimators

This work builds on the TABI framework, combining it with adaptive importance sampling approaches and showing both theoretically and empirically that the resulting estimators are capable of converging faster than the standard O(1/N) Monte Carlo mean squared error rate, potentially by a substantial margin.
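For reference, the target-aware decomposition underlying both AMCI and this follow-up work can be sketched as follows (a paraphrase in assumed notation, not a verbatim statement from either paper): the posterior expectation is split into three non-negative integrals, each estimated with its own importance sampler and each admitting a zero-variance optimal proposal:

\[
\mathbb{E}_{p(x \mid y)}[f(x)]
  = \frac{\int f^{+}(x)\, p(x, y)\, \mathrm{d}x \;-\; \int f^{-}(x)\, p(x, y)\, \mathrm{d}x}
         {\int p(x, y)\, \mathrm{d}x},
\qquad f^{+} = \max(f, 0), \quad f^{-} = \max(-f, 0).
\]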

Decision-Making with Auto-Encoding Variational Bayes

This work describes the error of importance sampling as a function of posterior variance and shows that proposal distributions learned with evidence upper bounds are better than the current state of the art.
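A hedged note on why an evidence upper bound is a natural objective in this setting (the exact bound used in that work may differ from this sketch): the relative variance of the importance weights, which controls the error of a self-normalized importance sampling estimate, equals the chi-squared divergence from the posterior to the proposal, a quantity that evidence upper bounds target more directly than the usual KL(q || p)-based lower bound:

\[
\operatorname{Var}_{q}\!\left[\frac{p(x \mid y)}{q(x)}\right]
  = \mathbb{E}_{q}\!\left[\left(\frac{p(x \mid y)}{q(x)}\right)^{2}\right] - 1
  = \chi^{2}\!\big(p(\cdot \mid y)\,\big\|\,q\big).
\]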

Expectation Programming

A particular instantiation of the EPF concept is realized by extending the probabilistic programming language Turing to allow so-called target-aware inference to be run automatically, and it is shown that this leads to significant empirical gains compared to conventional posterior-based inference.

Expectation programming: Adapting probabilistic programming systems to estimate expectations efficiently

A particular instance of the expectation programming concept, Expectation Programming in Turing (EPT), is realized by extending the PPS Turing to allow so-called target-aware inference to be run automatically, and it provides substantial empirical gains in practice.

Variational Determinant Estimation with Spherical Normalizing Flows

The Variational Determinant Estimator (VDE), a variational extension of the determinant estimator recently proposed by Sohl-Dickstein (2020), significantly reduces the variance even for low sample sizes by combining (importance-weighted) variational inference with a family of normalizing flows that allow density estimation on hyperspheres.

Faithful Inversion of Generative Models for Effective Amortized Inference

This work introduces an algorithm for faithfully, and minimally, inverting the graphical model structure of any generative model, proves the correctness of the approach, and empirically shows that the resulting minimally faithful inverses lead to better inference amortization than existing heuristic approaches.

Learning Stochastic Inverses

The Inverse MCMC algorithm is described, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler, and the efficiency of this sampler for a variety of parameter regimes and Bayes nets is explored.
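As a rough sketch of the block-proposal idea (not the Inverse MCMC algorithm itself, which learns the inverse conditionals from forward samples; the toy model, the name mh_step, and the hand-picked proposal below are all assumptions for illustration), a Metropolis-Hastings step that uses a conditional density q(x | y) in the role of a stochastic inverse looks like this:

import numpy as np

rng = np.random.default_rng(1)

def mh_step(x_curr, y, log_joint, propose, log_q):
    # One Metropolis-Hastings step with an independence ("block") proposal
    # q(x | y) standing in for a learned stochastic inverse.
    x_prop = propose(y)
    log_alpha = (log_joint(x_prop, y) - log_joint(x_curr, y)
                 + log_q(x_curr, y) - log_q(x_prop, y))
    return x_prop if np.log(rng.uniform()) < log_alpha else x_curr

# Toy conjugate-Gaussian model, assumed purely for illustration:
# x ~ N(0, 1), y | x ~ N(x, 1), so p(x | y) = N(y/2, 1/2).
log_joint = lambda x, y: -0.5 * x**2 - 0.5 * (y - x) ** 2
propose = lambda y: rng.normal(y / 2, 1.0)           # crude stand-in for a learned inverse
log_q = lambda x, y: -0.5 * (x - y / 2) ** 2         # its log density, up to a constant

x, samples = 0.0, []
for _ in range(5_000):
    x = mh_step(x, 1.5, log_joint, propose, log_q)
    samples.append(x)
print(np.mean(samples))                               # should approach the posterior mean y/2 = 0.75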

Deep Amortized Inference for Probabilistic Programs

A system for amortized inference in PPLs is proposed in the form of a parameterized guide program, and the common machine learning pattern in which a 'local' model is specified by 'global' random values and used to generate independent observed data points is explored in detail; this gives rise to amortized local inference supporting global model learning.

On Nesting Monte Carlo Estimators

The statistical implications of nesting MC estimators, including cases with multiple levels of nesting, are investigated; corresponding rates of convergence are derived, and empirical evidence is provided that these rates are observed in practice.
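For orientation, the single-level setting and the kind of rate result in question can be written as follows (a summary in assumed notation, with the smoothness conditions on g left implicit):

\[
  I = \mathbb{E}_{y}\!\left[ g\!\big(\mathbb{E}_{z \mid y}[\phi(y, z)]\big) \right],
  \qquad
  \hat{I}_{N,M} = \frac{1}{N} \sum_{n=1}^{N} g\!\left( \frac{1}{M} \sum_{m=1}^{M} \phi(y_{n}, z_{n,m}) \right),
\]

whose mean squared error converges at rate O(1/N + 1/M^2) under suitable smoothness assumptions on g, rather than the O(1/(NM)) rate of an ordinary (non-nested) Monte Carlo estimator using NM samples.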

Inference Networks for Sequential Monte Carlo in Graphical Models

A procedure is presented for constructing and learning a structured neural network that represents an inverse factorization of the graphical model, resulting in a conditional density estimator that takes particular values of the observed random variables as input and returns an approximation to the distribution of the latent variables.

Inference Trees: Adaptive Inference with Exploration

We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation.

Variational Inference with Normalizing Flows

It is demonstrated that the theoretical advantages of approximate posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provide a clear improvement in the performance and applicability of variational inference.
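The core device of that work is the change-of-variables identity for a composition of invertible maps, which gives the log density of the transformed variational sample:

\[
  z_{K} = f_{K} \circ \cdots \circ f_{1}(z_{0}),
  \qquad
  \log q_{K}(z_{K}) = \log q_{0}(z_{0}) - \sum_{k=1}^{K} \log \left| \det \frac{\partial f_{k}}{\partial z_{k-1}} \right|.
\]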

Advances in Importance Sampling

The basic IS algorithm is described and the recent advances in this methodology are revisited, focusing on multiple IS (MIS), the case where more than one proposal is available.
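For concreteness, the basic IS estimator and the deterministic-mixture (balance heuristic) weights commonly used in MIS with J proposals are (standard textbook forms, not necessarily the survey's exact notation):

\[
  \hat{I} = \frac{1}{N} \sum_{n=1}^{N} \frac{\pi(x_{n})}{q(x_{n})}\, f(x_{n}), \quad x_{n} \sim q,
  \qquad\text{and}\qquad
  w_{n} = \frac{\pi(x_{n})}{\tfrac{1}{J} \sum_{j=1}^{J} q_{j}(x_{n})}
\]

in the multiple-proposal case, where \(\pi\) is the (normalized) target density.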

Approximate inference for the loss-calibrated Bayesian

This work proposes an EM-like algorithm on the Bayesian posterior risk and shows how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
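The object being targeted there is the Bayesian posterior risk of a decision h given data \(\mathcal{D}\) and loss L (standard decision-theoretic definition, in assumed notation):

\[
  \mathcal{R}(h) = \mathbb{E}_{p(\theta \mid \mathcal{D})}\!\left[ L(\theta, h) \right],
  \qquad
  h^{\ast} = \arg\min_{h} \mathcal{R}(h),
\]

the point of loss calibration being that the posterior approximation should be chosen with this risk, rather than posterior fidelity alone, in mind.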

Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems

This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories; namely, asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature, and Markov chain methods.