On Some Principles of Statistical Inference

@article{Reid2015OnSP,
  title={On Some Principles of Statistical Inference},
  author={Nancy Reid and D. R. Cox},
  journal={International Statistical Review},
  year={2015},
  volume={83},
  pages={293--308}
}
  • N. Reid, D. Cox
  • Published 1 August 2015
  • Mathematics
  • International Statistical Review
Statistical theory aims to provide a foundation for studying the collection and interpretation of data, a foundation that does not depend on the particular details of the substantive field in which the data are being considered. This gives a systematic way to approach new problems, and a common language for summarising results; ideally, the foundations and common language ensure that statistical aspects of one study, or of several studies on closely related phenomena, can be broadly accessible… 
Validity and the foundations of statistical inference
TLDR
A demonstration that the inferential model framework meets the proposed criteria for valid and prior-free statistical inference, thereby solving perhaps the most important unsolved problem in statistics.
Direct and approximately valid probabilistic inference on a class of statistical functionals
TLDR
A generalized inferential model (IM) framework for direct probabilistic uncertainty quantification on the quantity of interest is developed and it is proved that this new approach provides approximately valid inference in the sense that the plausibility values assigned to hypotheses about the unknowns are asymptotically well-calibrated in a frequentist sense.
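The calibration property described in this summary can be illustrated with a small simulation. The sketch below is a minimal, hedged example, assuming a normal-mean model and a basic p-value-style plausibility function (not the generalized inferential model of the paper itself); it checks that plausibility values assigned to the true parameter fall below any level alpha with frequency no larger than alpha.

# Minimal simulation sketch of frequentist calibration of plausibility values.
# The normal-mean model and this plausibility function are illustrative
# assumptions, not the construction used in the cited paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta_true, n, reps = 0.0, 25, 20_000

xbar = rng.normal(theta_true, 1 / np.sqrt(n), size=reps)
# Plausibility of the true value under a basic normal-mean construction.
pl_true = 2 * (1 - norm.cdf(np.sqrt(n) * np.abs(xbar - theta_true)))

for alpha in (0.01, 0.05, 0.10):
    # Validity: small plausibility values at the truth occur with
    # frequency no larger than alpha (here the frequency is close to alpha).
    print(alpha, (pl_true <= alpha).mean())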
EDITORIAL: Statistical significance, P-values, and replicability
TLDR
A Task Force was convened to prepare a statement to clarify the role of hypothesis tests, p-values, and their relation to replicability, and remarkable unanimity was achieved.
Validity-Preservation Properties of Rules for Combining Inferential Models
TLDR
This paper concludes that the best strategy currently available is one that combines via a certain dimension-reduction step before the inferential model construction.
Trustworthiness of statistical inference
  • D. Hand
  • Philosophy
  • Journal of the Royal Statistical Society: Series A (Statistics in Society)
  • 2021
We examine the role of trustworthiness and trust in statistical inference, arguing that it is the extent of trustworthiness in inferential statistical tools which enables trust in the conclusions.
Possibility Measures for Valid Statistical Inference Based on Censored Data
TLDR
This paper provides an alternative approach based on a generalized inferential model whose output is a data-dependent possibility distribution; this distribution emerges from the introduction of a nested random set designed to predict an unobserved auxiliary variable and is calibrated to achieve certain frequentist guarantees.
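As a rough illustration of the nested-random-set idea mentioned in this summary, the sketch below carries out the basic three-step inferential-model construction for a normal mean; the model and the observed value are hypothetical, and the censored-data machinery of the cited paper is not reproduced.

# Minimal sketch of a basic inferential-model construction for X ~ N(theta, 1).
# x_obs is a hypothetical observation; this is not the censored-data method
# of the cited paper.
import numpy as np
from scipy.stats import norm

x_obs = 1.3

# A-step: association X = theta + Z with auxiliary Z ~ N(0, 1).
# P-step: predict the unobserved Z with the nested random set
#         S = {z : |z| <= |Z'|}, where Z' ~ N(0, 1).
# C-step: plausibility of {theta0} is the probability that S covers x_obs - theta0.
def plausibility(theta0, x=x_obs):
    return 2.0 * (1.0 - norm.cdf(abs(x - theta0)))

# The 90% plausibility region {theta : pl(theta) >= 0.10} is x_obs +/- 1.645 here.
grid = np.linspace(x_obs - 4, x_obs + 4, 2001)
inside = grid[np.array([plausibility(t) for t in grid]) >= 0.10]
print("90% plausibility interval ~ (%.3f, %.3f)" % (inside.min(), inside.max()))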
Statistical Inference as Severe Testing
TLDR
This book pulls back the cover on disagreements between experts charged with restoring integrity to science, and denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run.
Incorporating Expert Opinion in an Inferential Model while Maintaining Validity
TLDR
Insight is provided on how to incorporate partial priors “as they are”, guided by desired properties: correct partial priors should result in more efficient inferences and, most importantly, the inferences should always be calibrated, independent of the truthfulness of the partial prior.
Attitudes toward amalgamating evidence in statistics
TLDR
A general perspective on statistics is laid out, predominantly using probability generating models for both parameters and data, which can unify seemingly distinct statistical philosophies as well as provide some guidance toward resolving the current replication crisis in science.
False confidence, non-additive beliefs, and valid statistical inference
...

References

SHOWING 1-10 OF 89 REFERENCES
Statistical Evidence: A Likelihood Paradigm
Although the likelihood paradigm has been around for some time, Royall's distinctive voice, combined with his contribution of several novel lines of argument, has given new impetus to a school of…
What is a statistical model?
This paper addresses two closely related questions, What is a statistical model? and What is a parameter? The notions that a model must make sense, and that a parameter must have a well-defined…
The Likelihood Principle
The Roles of Conditioning in Inference
TLDR
The use of sufficient and ancillary statistics in constructing conditional distributions for inference about a parameter is reviewed, and the form of the approximations suggests methods for inference in more general families.
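To make the role of conditioning concrete, a small simulation of the classic two-measuring-instruments example follows. The example is a standard illustration of ancillarity and is included here as an assumption; it is not necessarily one of the cases treated in the cited review.

# Minimal sketch of conditioning on an ancillary statistic: a fair coin picks
# a precise (sd = 1) or noisy (sd = 10) instrument; the instrument label is
# ancillary. The sds, sample size, and true value are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
theta, reps = 0.0, 200_000
sd = np.where(rng.random(reps) < 0.5, 1.0, 10.0)   # ancillary: which instrument
x = rng.normal(theta, sd)
z = 1.96

# Conditional interval: half-width 1.96 times the sd of the instrument used.
cond = np.abs(x - theta) <= z * sd
print("coverage given precise instrument:", cond[sd == 1.0].mean())   # ~0.95
print("coverage given noisy instrument:  ", cond[sd == 10.0].mean())  # ~0.95

# Fixed-width interval built from the marginal sd ignores the ancillary:
marg_sd = np.sqrt(0.5 * 1.0**2 + 0.5 * 10.0**2)
fixed = np.abs(x - theta) <= z * marg_sd
print("fixed width, precise instrument:", fixed[sd == 1.0].mean())    # ~1.00
print("fixed width, noisy instrument:  ", fixed[sd == 10.0].mean())   # ~0.84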
Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction
TLDR
Technical aspects are not the focus of Principles of Applied Statistics, which also explains why it does not dwell intently on nonparametric models.
A simple procedure for the selection of significant effects
Summary. Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are…
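Because the summary is cut off before the rule itself is stated, the sketch below uses the familiar Benjamini-Hochberg step-up rule purely as a stand-in for a simple selection rule applied to many test statistics; it is not claimed to be the procedure of the cited paper, and the simulated data are hypothetical.

# Minimal sketch: selecting significant effects among many test statistics.
# The Benjamini-Hochberg rule is a familiar stand-in, NOT necessarily the
# rule proposed in the cited paper; the data are simulated for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m, prop_signal = 1000, 0.05                 # many tests, few real departures
effect = np.where(rng.random(m) < prop_signal, 3.0, 0.0)
zstat = rng.normal(effect, 1.0)
p = 2 * (1 - norm.cdf(np.abs(zstat)))       # two-sided p-values

alpha = 0.05
order = np.argsort(p)
passed = np.nonzero(p[order] <= alpha * np.arange(1, m + 1) / m)[0]
selected = order[: passed.max() + 1] if passed.size else np.array([], dtype=int)
print("selected:", selected.size,
      "| false selections:", int((effect[selected] == 0).sum()))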
Chapter 52 The Bootstrap
Marginalization Paradoxes in Bayesian and Structural Inference
We describe a range of routine statistical problems in which marginal posterior distributions derived from improper prior measures are found to have an unBayesian property, one that could not occur if…
The case for objective Bayesian analysis
TLDR
It is suggested that the statistical community should accept formal objective Bayesian techniques with confidence, but should be more cautious about casual objective Bayesian techniques.
Default priors for Bayesian and frequentist inference
Summary. We investigate the choice of default priors for use with likelihood for Bayesian and frequentist inference. Such a prior is a density or relative density that weights an observed likelihood…
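The sentence above is truncated, but its core idea, a default prior as a density that re-weights the observed likelihood, can be shown numerically. In the sketch below the binomial model, the Jeffreys prior, and the data are illustrative assumptions rather than the specific default priors investigated in the paper.

# Minimal numerical sketch: a default (here Jeffreys) prior re-weighting an
# observed binomial likelihood to give a posterior. Model and data are
# hypothetical; the cited paper studies default priors more generally.
import numpy as np

n, x = 20, 14                                   # hypothetical binomial data
theta = np.linspace(1e-4, 1 - 1e-4, 2000)
step = theta[1] - theta[0]

lik = theta**x * (1 - theta)**(n - x)           # observed likelihood
prior = theta**(-0.5) * (1 - theta)**(-0.5)     # Jeffreys default prior

post = lik * prior
post /= post.sum() * step                       # normalise to a density on the grid
print("posterior mean ~", (theta * post).sum() * step)
# Analytically the posterior is Beta(x + 1/2, n - x + 1/2), with mean
# (x + 0.5) / (n + 1) = 0.690 here, so the grid value should be close.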
...