Corpus ID: 247940272

Likelihood-Free Frequentist Inference: Confidence Sets with Correct Conditional Coverage

@inproceedings{Dalmasso2021LikelihoodFreeFI,
  title={Likelihood-Free Frequentist Inference: Confidence Sets with Correct Conditional Coverage},
  author={Niccol{\`o} Dalmasso and Luca Masserano and Dave Zhao and Rafael Izbicki and Ann B. Lee},
  year={2021}
}
Many areas of science make extensive use of computer simulators that implicitly encode likelihood functions of complex systems. Classical statistical methods are poorly suited for these so-called likelihood-free inference (LFI) settings, particularly outside asymptotic and low-dimensional regimes. Although new machine learning methods, such as normalizing flows, have revolutionized the sample efficiency and capacity of LFI methods, it remains an open question whether they produce confidence sets… 

Simulation-Based Inference with WALDO: Perfectly Calibrated Confidence Regions Using Any Prediction or Posterior Estimation Algorithm

TLDR
WALDO is presented, a novel method for constructing correctly calibrated confidence regions in simulation-based inference (SBI). It reframes the well-known Wald test and uses Neyman inversion to convert point predictions and posteriors from any prediction or posterior-estimation algorithm into confidence sets with correct conditional coverage, even at finite sample sizes.
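The Neyman-inversion step can be sketched numerically under toy assumptions: below, a conjugate Gaussian model whose closed-form posterior stands in for a learned posterior estimator, and plain Monte Carlo quantiles stand in for WALDO's quantile regression of critical values (the grid, sample sizes, and seed are illustrative choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 10, 0.05
prior_mu, prior_var, sigma2 = 0.0, 4.0, 1.0

def posterior(x):
    # Conjugate normal posterior for the mean (toy stand-in for a
    # neural posterior estimator).
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mu = post_var * (prior_mu / prior_var + x.sum() / sigma2)
    return post_mu, post_var

def waldo_stat(x, theta0):
    # Wald-style statistic built from posterior mean and variance.
    mu, var = posterior(x)
    return (mu - theta0) ** 2 / var

theta_grid = np.linspace(-2.0, 4.0, 61)
B = 2000

# Step 1: estimate the critical value of the statistic at each theta0
# by Monte Carlo (the method fits a quantile regression instead).
crit = np.empty_like(theta_grid)
for i, t0 in enumerate(theta_grid):
    sims = rng.normal(t0, np.sqrt(sigma2), size=(B, n))
    stats = np.array([waldo_stat(s, t0) for s in sims])
    crit[i] = np.quantile(stats, 1 - alpha)

# Step 2: Neyman inversion -- keep every theta0 whose statistic on the
# observed data falls below that theta0's own critical value.
x_obs = rng.normal(1.5, np.sqrt(sigma2), size=n)
conf_set = [t0 for t0, c in zip(theta_grid, crit)
            if waldo_stat(x_obs, t0) <= c]
print(f"confidence interval ~ [{min(conf_set):.2f}, {max(conf_set):.2f}]")
```

Because each candidate theta0 is compared against its own critical value, coverage is enforced at every parameter value rather than on average over a prior.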

Fast Optimal Estimation with Intractable Models using Permutation-Invariant Neural Networks

TLDR
This paper uses a decision-theoretic framework to argue that permutation-invariant neural networks are ideally placed for constructing Bayes estimators for arbitrary models, provided that simulation from these models is straightforward.

References

Showing 1–10 of 100 references

Confidence Sets and Hypothesis Testing in a Likelihood-Free Inference Setting

TLDR
A frequentist approach to LFI is presented that first formulates the classical likelihood ratio test (LRT) as a parametrized classification problem, and then uses the equivalence of tests and confidence sets to build confidence regions for parameters of interest.

Likelihood-Free Inference by Ratio Estimation

TLDR
An alternative inference approach is presented that is as easy to use as synthetic likelihood but less restricted in its assumptions, and that in a natural way enables automatic selection of relevant summary statistics from a large set of candidates.

Mining gold from implicit models to improve likelihood-free inference

TLDR
Inference techniques for implicit models are presented that combine the insight that additional latent information can be extracted from the simulator with the power of neural networks in regression and density-estimation tasks, leading to better sample efficiency and quality of inference.

Universal inference

TLDR
A surprisingly simple method is presented for producing statistical significance statements without any regularity conditions, and it is shown that in settings where computing the MLE is hard, upper-bounding the maximum likelihood suffices for constructing valid tests and intervals.
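The split-likelihood-ratio construction behind universal inference can be sketched in a toy Gaussian-mean problem (a hypothetical illustration, not from the paper): estimate the parameter on one half of the data, evaluate the likelihood ratio on the other half, and reject when it exceeds 1/alpha, which is valid by Markov's inequality with no regularity conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05

def split_lrt_reject(x, theta0):
    """Universal (split) LRT for H0: mean == theta0, unit-variance normal."""
    d0, d1 = x[: len(x) // 2], x[len(x) // 2 :]
    theta_hat = d1.mean()                      # estimate on D1 only
    # Log likelihood ratio evaluated on the held-out half D0.
    log_u = (-0.5 * np.sum((d0 - theta_hat) ** 2)
             + 0.5 * np.sum((d0 - theta0) ** 2))
    return log_u > np.log(1.0 / alpha)

# Type-I error check: under H0 the rejection rate stays below alpha.
rejections = np.mean([split_lrt_reject(rng.normal(0.0, 1.0, 100), 0.0)
                      for _ in range(2000)])
print(f"empirical size: {rejections:.3f}")
```

The test is conservative (its empirical size sits well below the nominal level), which is the price paid for dropping all regularity conditions.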

Approximating Likelihood Ratios with Calibrated Discriminative Classifiers

TLDR
It is shown that likelihood ratios are invariant under a specific class of dimensionality reduction maps, and that discriminative classifiers can be used to approximate the generalized likelihood ratio statistic when only a generative model for the data is available.
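The classifier-to-ratio trick can be sketched with a hand-rolled logistic regression on a 1-D example where the true log likelihood ratio is known: for p1 = N(1, 1) versus p0 = N(0, 1) it equals x - 0.5, and with balanced classes the learned log-odds recover it (the sample size and learning rate below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Label y=1 samples from p1 = N(1, 1) and y=0 samples from p0 = N(0, 1).
x = np.concatenate([rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression by gradient descent; with balanced classes the
# learned log-odds log d/(1-d) estimate the log likelihood ratio
# log p1(x)/p0(x), which here is exactly x - 0.5.
w, b = 0.0, 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - y) * x)
    b -= 0.5 * np.mean(p - y)

print(f"learned log-ratio ~ {w:.2f} * x + {b:.2f}  (true: 1.00 * x - 0.50)")
```

The same recipe scales to simulator output: replace the analytic densities with simulated samples at two parameter values and the linear model with any calibrated classifier.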

Gaussian Universal Likelihood Ratio Testing

TLDR
This work presents the first in-depth exploration of the size, power, and relationships between several universal LRT variants, and shows that a repeated subsampling approach is the best choice in terms of size and power.

Bayesian Optimization for Likelihood-Free Inference of Simulator-Based Statistical Models

TLDR
This paper proposes a strategy that combines probabilistic modeling of the discrepancy with optimization to facilitate likelihood-free inference, and is shown to accelerate inference by reducing the number of required simulations by several orders of magnitude.

Likelihood-free MCMC with Amortized Approximate Ratio Estimators

TLDR
It is demonstrated that the learned ratio estimator can be embedded in MCMC samplers to approximate likelihood-ratios between consecutive states in the Markov chain, allowing us to draw samples from the intractable posterior.
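The embedding can be sketched with a Metropolis-Hastings loop in which an exact closed-form log-ratio stands in for the learned amortized ratio estimator (the model, prior, and tuning constants below are illustrative assumptions, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(3)
x_obs = rng.normal(2.0, 1.0, 50)

def log_ratio(theta):
    # Stand-in for an amortized neural ratio estimator: here the exact
    # log p(x_obs | theta) for a unit-variance normal, up to a
    # theta-free constant that cancels in the acceptance ratio.
    return -0.5 * np.sum((x_obs - theta) ** 2)

def log_prior(theta):
    return -0.5 * theta ** 2 / 25.0            # N(0, 25) prior

# Metropolis-Hastings: the learned ratio replaces the intractable
# likelihood inside the acceptance probability.
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.3)
    log_a = (log_ratio(prop) + log_prior(prop)
             - log_ratio(theta) - log_prior(theta))
    if np.log(rng.uniform()) < log_a:
        theta = prop
    chain.append(theta)

post = np.array(chain[5000:])                  # discard burn-in
print(f"posterior mean ~ {post.mean():.2f}")
```

Amortization matters here: one trained estimator serves every state visited by the chain, so no fresh simulations are needed per MCMC step.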

An Extended Empirical Saddlepoint Approximation for Intractable Likelihoods

TLDR
A novel, more flexible, density estimator is proposed: the Extended Empirical Saddlepoint approximation, which is able to capture large departures from normality, while being scalable to high dimensions, and this in turn leads to more accurate parameter estimates, relative to the Gaussian alternative.
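For context, the standard empirical saddlepoint density that this work extends replaces the cumulant generating function with its empirical version (notation here is generic, not the paper's):

```latex
\hat K_n(s) = \log \frac{1}{n}\sum_{i=1}^{n} e^{s^\top x_i},
\qquad \hat K_n'(\hat s) = x \quad \text{(saddlepoint equation)},
\qquad
\hat f_n(x) = \frac{\exp\{\hat K_n(\hat s) - \hat s^\top x\}}
                   {\sqrt{(2\pi)^d \det \hat K_n''(\hat s)}}.
```

The extension addresses points $x$ far from the sample mean, where the saddlepoint equation for $\hat K_n$ may fail to have a solution.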

ABC random forests for Bayesian parameter inference

TLDR
This work proposes to conduct likelihood-free Bayesian inference about parameters with no prior selection of the relevant components of the summary statistics, and without deriving the associated tolerance level, using the random forest methodology of Breiman (2001).
...