# Contrastive Neural Ratio Estimation

```bibtex
@article{Miller2022ContrastiveNR,
  title   = {Contrastive Neural Ratio Estimation},
  author  = {Benjamin Kurt Miller and Christoph Weniger and Patrick Forr{\'e}},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2210.06170}
}
```

Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in the position to run diagnostics that practitioners depend on. It also…
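The binary (NRE-A) formulation can be illustrated on a toy model where the exact ratio is known. The sketch below is illustrative only — the Gaussian simulator and all names are assumptions, not from the paper. It shows the key identity: a classifier that distinguishes joint pairs (θ, x) from marginal pairs recovers, at its optimum, the likelihood-to-evidence ratio through its logit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator (assumed for illustration): theta ~ N(0,1), x | theta ~ N(theta,1).
# The evidence is then p(x) = N(x; 0, 2), so the exact log ratio is available.
def log_normal(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mu) ** 2 / var

def log_ratio(theta, x):
    # log p(x | theta) - log p(x)
    return log_normal(x, theta, 1.0) - log_normal(x, 0.0, 2.0)

# NRE-A training data: joint pairs (theta, x) labelled 1, and marginal pairs
# (theta, x_marginal) with x shuffled, labelled 0.
theta = rng.normal(0.0, 1.0, size=10_000)
x = theta + rng.normal(0.0, 1.0, size=10_000)
x_marginal = rng.permutation(x)  # would supply the class-0 pairs when training a network

# The Bayes-optimal classifier is d*(theta, x) = r / (1 + r), so its logit
# equals the log likelihood-to-evidence ratio.
r = np.exp(log_ratio(theta, x))
d_star = r / (1.0 + r)
recovered_log_ratio = np.log(d_star) - np.log1p(-d_star)

print(np.max(np.abs(recovered_log_ratio - log_ratio(theta, x))))  # ~0 up to float error
```

In practice a neural network replaces `d_star`, but the same logit identity is what turns a trained binary classifier into a ratio estimator.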

## References


### Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency

- Computer Science, EMNLP
- 2018

It is shown that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method, which is closely related to negative sampling methods, now widely used in NLP.

### Truncated Marginal Neural Ratio Estimation

- Computer Science, NeurIPS
- 2021

This work presents a neural simulation-based inference algorithm that simultaneously offers simulation efficiency and fast empirical posterior testability, a combination unique among modern algorithms.

### On Contrastive Learning for Likelihood-free Inference

- Computer Science, Mathematics, ICML
- 2020

This work shows that two popular likelihood-free approaches to parameter inference in stochastic simulator models can be unified under a general contrastive learning scheme, and clarifies how they should be run and compared.

### High-Dimensional Density Ratio Estimation with Extensions to Approximate Likelihood Computation

- Computer Science, AISTATS
- 2014

This work proposes a simple-to-implement, fully nonparametric density ratio estimator that expands the ratio in terms of the eigenfunctions of a kernel-based operator; these functions reflect the underlying geometry of the data, often leading to better estimates without an explicit dimension reduction step.

### GATSBI: Generative Adversarial Training for Simulation-Based Inference

- Computer Science, ICLR
- 2022

GATSBI opens up opportunities for leveraging advances in GANs to perform Bayesian inference on high-dimensional simulation-based models, and shows how GATSBI can be extended to perform sequential posterior estimation to focus on individual observations.

### Improving predictive inference under covariate shift by weighting the log-likelihood function

- Mathematics
- 2000

### Noise-contrastive estimation: A new estimation principle for unnormalized statistical models

- Computer Science, AISTATS
- 2010

A new estimation principle is presented to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity, which leads to a consistent (convergent) estimator of the parameters.
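The principle can be shown concretely in a small, hedged sketch. Everything below — the 1-D unnormalized Gaussian, the noise distribution, the learning rate — is an assumption chosen for illustration, not a detail from the paper: we fit only the log-normalizer of an unnormalized model by logistic regression against known noise, and recover log Z.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized model (assumed for illustration): phi(x) = exp(-x^2 / 2),
# whose true normalizer is Z = sqrt(2*pi). NCE learns c ~= -log Z directly
# as an extra parameter of the model's log-density.
n = 50_000
data = rng.normal(0.0, 1.0, size=n)   # samples from the normalized model
noise = rng.normal(0.0, 2.0, size=n)  # artificial noise with known density

def log_noise_pdf(u):
    return -0.5 * np.log(2 * np.pi * 4.0) - 0.5 * u ** 2 / 4.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Nonlinear logistic regression "data vs noise" with the model log-density
# in the regression nonlinearity: G(u; c) = log phi(u) + c - log p_noise(u).
c = 0.0
lr = 0.1
for _ in range(500):
    g_data = -0.5 * data ** 2 + c - log_noise_pdf(data)
    g_noise = -0.5 * noise ** 2 + c - log_noise_pdf(noise)
    # Gradient of the NCE objective with respect to c (ascent step).
    grad = np.mean(1.0 - sigmoid(g_data)) - np.mean(sigmoid(g_noise))
    c += lr * grad

print(c, -0.5 * np.log(2 * np.pi))  # c should approach -log Z ~= -0.9189
```

Because the normalizer is estimated as just another parameter, the method applies to models whose partition function has no closed form.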

### Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics

- Computer Science, Mathematics, J. Mach. Learn. Res.
- 2012

The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise and it is shown that the new method strikes a competitive trade-off in comparison to other estimation methods for unnormalized models.

### Likelihood-free MCMC with Amortized Approximate Ratio Estimators

- Computer Science, ICML
- 2020

It is demonstrated that the learned ratio estimator can be embedded in MCMC samplers to approximate likelihood ratios between consecutive states in the Markov chain, allowing us to draw samples from the intractable posterior.
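A minimal sketch of the idea, with an analytic ratio standing in for the learned network — the Gaussian model and every name here are assumptions, not the paper's code. The point it illustrates: the Metropolis-Hastings acceptance probability only needs ratios, so a likelihood-to-evidence ratio estimator plus the prior is enough to sample the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed): theta ~ N(0,1), x | theta ~ N(theta,1).
# For the observation x0 = 1.0 the exact posterior is N(0.5, 0.5).
x0 = 1.0

def log_ratio(theta, x):
    # Stand-in for a learned log p(x|theta)/p(x); here computed analytically
    # as log N(x; theta, 1) - log N(x; 0, 2).
    return -0.5 * (x - theta) ** 2 + 0.25 * x ** 2 + 0.5 * np.log(2.0)

def log_prior(theta):
    return -0.5 * theta ** 2  # N(0,1) up to a constant

# Metropolis-Hastings on theta: the acceptance ratio uses only the ratio
# estimator and the prior, never the intractable likelihood or evidence.
theta, samples = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 1.0)
    log_alpha = (log_ratio(prop, x0) + log_prior(prop)) \
              - (log_ratio(theta, x0) + log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
    samples.append(theta)

samples = np.array(samples)[2_000:]  # discard burn-in
print(samples.mean())  # should be near the exact posterior mean 0.5
```

Amortization comes from the fact that the same trained ratio network can be reused for any observation `x0` without re-running the simulator.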

### Fast and credible likelihood-free cosmology with truncated marginal neural ratio estimation

- Computer Science, Journal of Cosmology and Astroparticle Physics
- 2022

It is shown that TMNRE can achieve converged posteriors using orders of magnitude fewer simulator calls than conventional Markov chain Monte Carlo (MCMC) methods, and in these examples the required number of samples is effectively independent of the number of nuisance parameters.