A Statistical Inference Course Based on p-Values

@article{Martin2016ASI,
  title={A Statistical Inference Course Based on p-Values},
  author={Ryan Martin},
  journal={The American Statistician},
  year={2016},
  volume={71},
  pages={128--136}
}
Abstract: Introductory statistical inference texts and courses treat the point estimation, hypothesis testing, and interval estimation problems separately, with primary emphasis on large-sample approximations. Here, I present an alternative approach to teaching this course, built around p-values, emphasizing provably valid inference for all sample sizes. Details about computation and marginalization are also provided, with several illustrative examples, along with a course outline. Supplementary…
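As a rough sketch of the kind of computation the abstract alludes to (this is not code from the paper), the snippet below builds a two-sided p-value function for a normal mean with known variance and reads a point estimate, a 95% interval, and a level-0.05 test off the same curve. The function name, grid bounds, and simulated data are all illustrative assumptions.

    import numpy as np
    from scipy import stats

    def pvalue_function(theta0, x, sigma=1.0):
        # Exact two-sided p-value for H0: theta = theta0 under N(theta, sigma^2);
        # no large-sample approximation, so it is valid at every sample size n.
        z = np.sqrt(len(x)) * (np.mean(x) - theta0) / sigma
        return 2 * stats.norm.sf(abs(z))

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.3, scale=1.0, size=10)   # deliberately small sample

    grid = np.linspace(-1.0, 1.5, 501)
    pvals = np.array([pvalue_function(t, x) for t in grid])

    theta_hat = grid[np.argmax(pvals)]        # point estimate: maximizer of the p-value curve
    ci_95 = grid[pvals >= 0.05]               # 95% interval: {theta : p(theta) >= 0.05}
    reject = pvalue_function(0.0, x) < 0.05   # level-0.05 test of H0: theta = 0

    print(theta_hat, ci_95.min(), ci_95.max(), reject)

Because the p-value here is exact under the assumed model, the interval obtained by thresholding the curve at alpha has guaranteed coverage at every n, which is the sense of "provably valid inference for all sample sizes" in the abstract.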
Citations

Generalized inferential models for meta-analyses based on few studies.
TLDR
A new approach, based on the generalized inferential model framework, whose success lies in marginalizing out the between-study variance so that an accurate estimate of it is not essential, and which outperforms existing methods across a wide range of scenarios.
Prior-Free Probabilistic Inference for Econometricians
TLDR
This paper is intended to inform econometricians that an alternative inferential model (IM) approach exists that can achieve probabilistic inference without a prior and while enjoying certain calibration properties essential for reproducibility, etc.
Interval estimation, point estimation, and null hypothesis significance testing calibrated by an estimated posterior probability of the null hypothesis
Much of the blame for failed attempts to replicate reports of scientific findings has been placed on ubiquitous and persistent misinterpretations of the p value. An increasingly popular solution is...
False confidence, non-additive beliefs, and valid statistical inference
An imprecise-probabilistic characterization of frequentist statistical inference
Between the two dominant schools of thought in statistics, namely, Bayesian and classical/frequentist, a main difference is that the former is grounded in the mathematically rigorous theory of…
General and feasible tests with multiply-imputed datasets
TLDR
A general MI procedure is proposed, called stacked multiple imputation (SMI), for performing Wald's tests, likelihood ratio tests, and Rao's score tests by a unified algorithm that requires neither EOMI nor an infinite number of imputations.
Response to the comment “Confidence in confidence distributions!”
Thanks to Drs Céline Cunen, Nils Lid Hjort, and Tore Schweder for their interest in our recent contribution [1] concerning the probability dilution phenomenon in satellite conjunction analysis and…
A mathematical characterization of confidence as valid belief
Confidence is a fundamental concept in statistics, but there is a tendency to misinterpret it as probability. In this paper, I argue that an intuitively and mathematically more appropriate…
Correcting for attenuation due to measurement error
I present a frequentist method for quantifying uncertainty when correcting correlations for attenuation due to measurement error. The method is conservative but has far better coverage properties…
Valid Model-Free Prediction of Future Insurance Claims
Bias resulting from model misspecification is a concern when predicting insurance claims. Indeed, this bias puts the insurer at risk of making invalid or unreliable predictions. A method that could…

References

Showing 1-10 of 35 references
Statistical Inference: Likelihood to Significance
Abstract: The concepts of likelihood and significance were defined and initially developed by R. A. Fisher, but followed almost separate and distinct routes. We suggest that a central function of…
P Values: What They Are and What They Are Not
Abstract: P values (or significance probabilities) have been used in place of hypothesis tests as a means of giving more information about the relationship between the data and the hypothesis than…
Confidence Curves: An Omnibus Technique for Estimation and Testing Statistical Hypotheses
Abstract: A standard practice of physical scientists is to report estimates (“measurements”) accompanied by their standard errors (or alternatively “average errors” or “probable errors”). Such reports…
Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions
This lively book lays out a methodology of confidence distributions and puts them through their paces. Among other merits, they lead to optimal combinations of confidence from different sources of…
Plausibility Functions and Exact Frequentist Inference
In the frequentist program, inferential methods with exact control on error rates are a primary focus. The standard approach, however, is to rely on asymptotic approximations, which may not be…
Applied Asymptotics: Case Studies in Small-Sample Statistics
TLDR
Covers uncertainty and approximation, regression with continuous responses, and likelihood approximations, along with some numerical techniques.
Estimation in Parallel Randomized Experiments
Many studies comparing new treatments to standard treatments consist of parallel randomized experiments. In the example considered here, randomized experiments were conducted in eight schools to…
Paradoxes and Improvements in Interval Estimation
Abstract: Cases where confidence sets are empty or include every possible parameter value are an embarrassment to standard theory and difficult to explain to students. To alleviate this problem, and…
Confidence and Likelihood
Confidence intervals for a single parameter are spanned by quantiles of a confidence distribution, and one-sided p-values are cumulative confidences. Confidence distributions are thus a unifying…
Inferential Models: Reasoning with Uncertainty
TLDR
Table of contents (partial): Preliminaries (introduction; assumed background; scientific inference: an overview; prediction and inference; outline of the book); Prior-Free Probabilistic Inference; and some further technical details.