Contrastive Neural Ratio Estimation

Benjamin Kurt Miller, Christoph Weniger, Patrick Forré
Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in a position to run the diagnostics that practitioners depend on. It also…
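The binary (NRE-A) formulation mentioned above can be illustrated with a toy sketch: a classifier is trained to distinguish dependent pairs drawn from the joint p(θ, x) from independent pairs drawn from the product of marginals p(θ)p(x); at optimum, its logit equals the log likelihood-to-evidence ratio log p(x|θ)/p(x). The simulator, feature map, and `log_ratio` helper below are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: theta ~ N(0, 1), x | theta ~ N(theta, 1).
n = 10000
theta = rng.normal(size=n)
x = theta + rng.normal(size=n)

# Class 1: dependent (joint) pairs; class 0: shuffled (marginal) pairs.
theta_all = np.concatenate([theta, theta])
x_all = np.concatenate([x, rng.permutation(x)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Quadratic features, so a linear logit can represent the true log-ratio,
# which is quadratic in (theta, x) for this Gaussian toy model.
def features(t, u):
    return np.stack([np.ones_like(t), t, u, t * u, t**2, u**2], axis=1)

F = features(theta_all, x_all)
w = np.zeros(F.shape[1])

# Plain gradient descent on the binary cross-entropy loss.
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-np.clip(F @ w, -30, 30)))
    w -= 0.05 * F.T @ (p - y) / len(y)

# The trained logit approximates log p(x | theta) / p(x).
def log_ratio(t, u):
    return features(np.atleast_1d(t), np.atleast_1d(u)) @ w
```

On average, the learned ratio should score matched (θ, x) pairs higher than mismatched ones, which is exactly the quantity the diagnostics discussed in the paper probe.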



Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency

It is shown that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based variant, which is closely related to the negative sampling methods now widely used in NLP.

Truncated Marginal Neural Ratio Estimation

This work presents a neural simulation-based inference algorithm that simultaneously offers simulation efficiency and fast empirical testability of the posterior, a combination unique among modern algorithms.

On Contrastive Learning for Likelihood-free Inference

This work shows that two popular likelihood-free approaches to parameter inference in stochastic simulator models can be unified under a general contrastive learning scheme, and clarifies how they should be run and compared.

High-Dimensional Density Ratio Estimation with Extensions to Approximate Likelihood Computation

This work proposes a simple-to-implement, fully nonparametric density ratio estimator that expands the ratio in terms of the eigenfunctions of a kernel-based operator; these functions reflect the underlying geometry of the data, often leading to better estimates without an explicit dimension reduction step.

GATSBI: Generative Adversarial Training for Simulation-Based Inference

GATSBI opens up opportunities for leveraging advances in GANs to perform Bayesian inference on high-dimensional simulation-based models, and shows how GATSBI can be extended to perform sequential posterior estimation to focus on individual observations.

Noise-contrastive estimation: A new estimation principle for unnormalized statistical models

A new estimation principle is presented to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity, which leads to a consistent (convergent) estimator of the parameters.
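The NCE principle summarized here can be sketched concretely: fit an unnormalized model by logistic regression against samples from a known noise distribution, with the model's log-density appearing inside the sigmoid and the log-normalizer treated as a free parameter. The toy model, noise choice, and learning rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data from N(2, 1); the unnormalized model is
#   log p_model(x; mu, c) = -(x - mu)^2 / 2 + c,
# where c is a free parameter standing in for the (negative log) normalizer.
data = rng.normal(2.0, 1.0, size=20000)
noise = rng.normal(0.0, 3.0, size=20000)  # samples from a known noise density

def log_noise(x):
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0) - 0.5 * np.log(2 * np.pi)

mu, c = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    # h(x) = sigmoid(log p_model - log p_noise): probability that x is "data".
    g_data = -0.5 * (data - mu) ** 2 + c - log_noise(data)
    g_noise = -0.5 * (noise - mu) ** 2 + c - log_noise(noise)
    h_data = 1.0 / (1.0 + np.exp(-np.clip(g_data, -30, 30)))
    h_noise = 1.0 / (1.0 + np.exp(-np.clip(g_noise, -30, 30)))
    # Gradient ascent on the NCE objective
    #   E_data[log h] + E_noise[log(1 - h)].
    d_mu = np.mean((1 - h_data) * (data - mu)) - np.mean(h_noise * (noise - mu))
    d_c = np.mean(1 - h_data) - np.mean(h_noise)
    mu += lr * d_mu
    c += lr * d_c
```

At the optimum the model matches the data density, so mu recovers the true mean and c converges to the true negative log-normalizer, -0.5 log(2π); consistency of this estimator is exactly the paper's main result.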

Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics

The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, and it is shown that the new method strikes a competitive trade-off in comparison to other estimation methods for unnormalized models.

Likelihood-free MCMC with Amortized Approximate Ratio Estimators

It is demonstrated that the learned ratio estimator can be embedded in MCMC samplers to approximate likelihood-ratios between consecutive states in the Markov chain, allowing us to draw samples from the intractable posterior.
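Embedding a ratio estimator in MCMC can be sketched on a conjugate toy problem: since the posterior is proportional to r(x_obs | θ) p(θ), a Metropolis-Hastings sampler only needs the learned log-ratio and the prior, never the intractable likelihood. Here the exact analytic log-ratio stands in for a trained network; the model, proposal scale, and chain length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: prior theta ~ N(0, 1), likelihood x | theta ~ N(theta, 1),
# so the evidence is p(x) = N(0, 2). A trained network would be used here;
# this stand-in is the exact log-ratio log p(x|theta) - log p(x).
def log_ratio(theta, x):
    return -0.5 * (x - theta) ** 2 + 0.25 * x ** 2 + 0.5 * np.log(2.0)

def log_prior(theta):
    return -0.5 * theta ** 2

x_obs = 1.5  # the observation we condition on

# Metropolis-Hastings with acceptance ratio built from r and the prior:
# the posterior density is proportional to r(x_obs | theta) * p(theta).
theta = 0.0
samples = []
for _ in range(50000):
    prop = theta + 0.5 * rng.normal()
    log_alpha = (log_ratio(prop, x_obs) + log_prior(prop)
                 - log_ratio(theta, x_obs) - log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
    samples.append(theta)

samples = np.array(samples[5000:])  # drop burn-in
```

For this conjugate toy, the chain should recover the known posterior N(x_obs / 2, 1/2), which makes it easy to check that the ratio-based acceptance step is correct.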

Fast and credible likelihood-free cosmology with truncated marginal neural ratio estimation

It is shown that TMNRE can achieve converged posteriors using orders of magnitude fewer simulator calls than conventional Markov chain Monte Carlo (MCMC) methods, and in these examples the required number of samples is effectively independent of the number of nuisance parameters.