Universal inference

@article{Wasserman2020UniversalI,
  title={Universal inference},
  author={Larry A. Wasserman and Aaditya Ramdas and Sivaraman Balakrishnan},
  journal={Proceedings of the National Academy of Sciences},
  year={2020},
  volume={117},
  pages={16880--16890}
}
Significance: Most statistical methods rely on certain mathematical conditions, known as regularity assumptions, to ensure their validity. Without these conditions, statistical quantities like P values and confidence intervals might not be valid. In this paper we give a surprisingly simple method for producing statistical significance statements without any regularity conditions. The resulting hypothesis tests can be used for any parametric model and for several nonparametric models. We propose…
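The "surprisingly simple method" the abstract refers to is the split likelihood ratio test: split the data in two, fit any estimate of the alternative on one half, evaluate the likelihood ratio against the null MLE on the other half, and reject when the ratio exceeds 1/α. Markov's inequality then gives finite-sample validity with no regularity conditions. A minimal sketch, assuming a N(μ, 1) model with the simple null H0: μ = 0; the function name, the 50/50 split, and the Gaussian model are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def split_lrt_reject(x, alpha=0.05, rng=None):
    """Split likelihood ratio test of H0: mu = 0 for N(mu, 1) data.

    Returns True if H0 is rejected at level alpha. Finite-sample valid
    by Markov's inequality, with no regularity conditions needed.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    perm = rng.permutation(len(x))
    d1, d0 = x[perm[: len(x) // 2]], x[perm[len(x) // 2:]]
    mu_hat1 = d1.mean()  # unrestricted MLE computed from D1 only
    # Under the simple null the null MLE is mu = 0, so the log split
    # statistic is log L0(mu_hat1) - log L0(0), evaluated on D0 only.
    log_u = np.sum(-0.5 * (d0 - mu_hat1) ** 2 + 0.5 * d0 ** 2)
    return log_u >= np.log(1.0 / alpha)  # reject iff U >= 1/alpha

rng = np.random.default_rng(0)
print(split_lrt_reject(rng.normal(2.0, 1.0, 200), rng=1))  # True
```

With a strong signal (μ = 2, n = 200) the test rejects easily; under the null the rejection rate stays below α, typically well below, which is the price of the test's universality.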

Likelihood-Free Frequentist Inference: Confidence Sets with Correct Conditional Coverage

TLDR
A practical procedure for the Neyman construction of confidence sets with nominal coverage, and diagnostics that estimate conditional coverage over the entire parameter space are presented.

Likelihood-Free Frequentist Inference: Bridging Classical Statistics and Machine Learning in Simulation and Uncertainty Quantification

TLDR
This paper presents a statistical framework for LFI that unifies classical statistics with modern machine learning to construct frequentist confidence sets and hypothesis tests with finite-sample guarantees of nominal coverage and rigorous diagnostics for assessing empirical coverage over the entire parameter space.

On the choice of the splitting ratio for the split likelihood ratio test

The recently introduced framework of universal inference provides a new approach to constructing hypothesis tests and confidence regions that are valid in finite samples and do not rely on any

Universal Inference Meets Random Projections: A Scalable Test for Log-concavity

TLDR
This work finds that the highest power is obtained by using random projections to convert the d-dimensional testing problem into many one-dimensional problems, leading to a simple procedure that is statistically and computationally efficient.

Post-selection inference for e-value based confidence intervals

Suppose that one can construct a valid (1 − δ)-CI for each of K parameters of potential interest. If a data analyst uses an arbitrary data-dependent criterion to select some subset S of

A Note on Likelihood Ratio Tests for Models with Latent Variables.

TLDR
In this note, it is shown how the regularity conditions of Wilks' theorem may be violated in three examples of models with latent variables, and a more general theory is given that provides the correct asymptotics for these LRTs.

Order selection with confidence for finite mixture models

The determination of the number of mixture components (the order) of a finite mixture model has been an enduring problem in statistical inference. We prove that the closed testing principle leads to a

Asymptotic Distribution-Free Independence Test for High Dimension Data

TLDR
This paper proposes a general framework for independence testing: first train a classifier that distinguishes the joint and product distributions, then test the significance of the fitted classification algorithm. The new test is applied to a single-cell data set to test the independence between two types of single-cell sequencing measurements.

Estimating means of bounded random variables by betting

TLDR
A general approach for deriving concentration bounds that can be seen as a generalization and improvement of the celebrated Chernoff method is presented, based on a class of composite nonnegative martingales, with strong connections to testing by betting and the method of mixtures.

An imprecise-probabilistic characterization of frequentist statistical inference

Between the two dominant schools of thought in statistics, namely, Bayesian and classical/frequentist, a main difference is that the former is grounded in the mathematically rigorous theory of
...

References

Showing 1-10 of 51 references

Robust Bayesian Inference via Coarsening

TLDR
This work introduces a novel approach to Bayesian inference that improves robustness to small departures from the model: rather than conditioning on the event that the observed data are generated by the model, one conditions on the event that the model generates data close to the observed data, in a distributional sense.

Hypothesis test for normal mixture models: The EM approach

Normal mixture distributions are arguably the most important mixture models, and also the most technically challenging. The likelihood function of the normal mixture model is unbounded based on a set

A UNIVERSALLY CONSISTENT MODIFICATION OF MAXIMUM LIKELIHOOD

In some models, both parametric and not, maximum likelihood estimation fails to be consistent. We investigate why the maximum likelihood method breaks down with some examples and notice the paradox

Gaussian Mixture Clustering Using Relative Tests of Fit.

TLDR
This work considers clustering based on significance tests for Gaussian Mixture Models (GMMs), building on the SigClust method, and introduces a new test based on the idea of relative fit that tests whether a mixture of Gaussians provides a better fit than a single Gaussian.

Uniform, nonparametric, non-asymptotic confidence sequences

TLDR
This paper develops non-asymptotic confidence sequences that achieve arbitrary precision under nonparametric conditions and strengthens and generalizes existing constructions of finite-time iterated logarithm ("finite LIL") bounds.

On testing marginal versus conditional independence

We consider testing marginal independence versus conditional independence in a trivariate Gaussian setting. The two models are non-nested and their intersection is a union of two marginal

p-Values for High-Dimensional Regression

TLDR
Inference across multiple random splits can be aggregated while maintaining asymptotic control over the inclusion of noise variables, and it is shown that the resulting p-values can be used for control of both family-wise error and false discovery rate.

Maximum likelihood estimation in Gaussian models under total positivity

We analyze the problem of maximum likelihood estimation for Gaussian distributions that are multivariate totally positive of order two (MTP2). By exploiting connections to phylogenetics and

Nonparametric maximum likelihood estimation of survival functions with a general stochastic ordering and its dual

SUMMARY This paper discusses estimation of survival functions under an arbitrary partial stochastic ordering of the underlying populations. This technique is especially useful when data are not

Statistical guarantees for the EM algorithm: From population to sample-based analysis

TLDR
A general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM and consequences of the general theory for three canonical examples of incomplete-data problems are developed.
...