The ASA Statement on p-Values: Context, Process, and Purpose

  • Ron Wasserstein, Nicole A. Lazar
  • The American Statistician
  • pp. 129–133
Cobb’s concern was a long-worrisome circularity in the sociology of science based on the use of bright lines such as p < 0.05: “We teach it because it’s what we do; we do it because it’s what we teach.” This concern was brought to the attention of the ASA Board, which was also stimulated by highly visible discussions over the last few years. For example, Science News (Siegfried 2010) wrote: “It’s science’s dirtiest secret: The ‘scientific method’ of testing hypotheses by statistical… 
Is Banning Significance Testing the Best Way to Improve Applied Social Science Research? – Questions on Gorard (2016)
Significance testing is widely used in social science research. It has long been criticised on statistical grounds and for problems in research practice. This paper is an applied researcher’s
Editorial: Replication and Reliability in Behavior Science and Behavior Analysis: A Call for a Conversation
  • D. Hantula
  • Psychology
    Perspectives on behavior science
  • 2019
It has been over a decade since Ioannidis (2005) published a provocative indictment of medical research titled “Why Most Published Research Findings Are False.” According to the PLoS Medicine website,
Coup de Grâce for a Tough Old Bull: “Statistically Significant” Expires
ABSTRACT Many controversies in statistics are due primarily or solely to poor quality control in journals, bad statistical textbooks, bad teaching, unclear writing, and lack of knowledge of the
Bernoulli’s Fallacy.
Even members of our community who do not teach or practice statistics are likely aware that the last decade has seen a number of public and visible controversies in the field. The American
The evidence contained in the P-value is context dependent.
Relationships Between p-values and Pearson Correlation Coefficients, Type 1 Errors and Effect Size Errors, Under a True Null Hypothesis
The American Statistical Association (ASA) published a statement in 2016 in The American Statistician for “researchers, practitioners and science writers who are not primarily statisticians” on the
Damaging the Case for Improving Social Science Methodology through Misrepresentation: Re-Asserting Confidence in Hypothesis Testing as a Valid Scientific Process
This paper is a response to Gorard's article, ‘Damaging real lives through obstinacy: re-emphasising why significance testing is wrong’ in Sociological Research Online 21(1). For many years Gorard
Are p‐values under attack? Contribution to the discussion of ‘A critical evaluation of the current “p‐value controversy” ’
  • W. Piegorsch
  • Medicine
    Biometrical journal. Biometrische Zeitschrift
  • 2017
The take-away message from the exposition is that while some high-visibility sources have called into question use of p-values in modern, data-rich, scientific discourse, their complaints may be overblown: the p-value is as indispensable (Prof. Wellek’s term) as ever in contemporary medical applications and in associated areas such as regulatory affairs.
In Defense of P Values
  • Psychology
  • 2016
Academics love to hate P values. Recently, dozens of commentaries in prestigious academic journals and well-thought-out position papers in academic blogs have criticized the use or misuse of P values
Curbing type I and type II errors
  • K. Rothman
  • Education
    European Journal of Epidemiology
  • 2010
The statistical education of scientists emphasizes a flawed approach to data analysis that should have been discarded long ago; the grip of statistical significance testing on the biomedical sciences amounts to a tyranny, as Loftus argued of the social sciences two decades ago.
Scientific method: Statistical errors
It turned out that the problem was not in the data or in Motyl's analyses; it lay in the surprisingly slippery nature of the P value, which is neither as reliable nor as objective as most scientists assume.
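The “surprisingly slippery” behavior described above can be seen in a small simulation. The numbers below are assumed purely for illustration (a two-group comparison with standardized effect 0.5 and 16 subjects per group, so power is modest): replicating the same study many times scatters the p-value across nearly the whole unit interval.

```python
import math
import random

random.seed(1)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Illustrative, assumed design: true standardized effect d = 0.5,
# n = 16 per group, so the standard error of the estimate is sqrt(2/n).
d, n = 0.5, 16
se = math.sqrt(2 / n)

pvals = []
for _ in range(10_000):
    est = random.gauss(d, se)            # one replication's effect estimate
    pvals.append(two_sided_p(est / se))  # that replication's p-value

# Fraction of replications reaching p < 0.05 (the study's power, ~0.29):
sig = sum(p < 0.05 for p in pvals) / len(pvals)
```

Even with a genuine effect, identical replications yield p-values ranging from far below 0.001 to well above 0.9, which is why a single p-value is a noisy summary of the evidence.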
The cult of statistical significance: how the standard error costs us jobs, justice, and lives
The fallacy of the null-hypothesis significance test.
To the experimental scientist, statistical inference is a research instrument, a processing device by which unwieldy masses of raw data may be refined into a product more suitable for assimilation into the corpus of science, and in this lies both strength and weakness.
What is Bayesian statistics and why everything else is wrong
We use a single example to explain (1) the Likelihood Principle, (2) Bayesian statistics, and (3) why classical statistics cannot be used to compare hypotheses. 1. The Slater School. The example and
P Values: What They are and What They are Not
Abstract P values (or significance probabilities) have been used in place of hypothesis tests as a means of giving more information about the relationship between the data and the hypothesis than
The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant
It is common to summarize statistical comparisons by declarations of statistical significance or nonsignificance. Here we discuss one problem with such declarations, namely that changes in
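The point of the entry above is easy to make concrete with assumed, illustrative numbers: two estimates with equal standard errors, one “significant” and one not, whose difference is nowhere near significant.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical studies (assumed numbers): same standard error, estimates 2.5 and 1.0.
est_a, est_b, se = 2.5, 1.0, 1.0
p_a = two_sided_p(est_a / se)   # ~0.012 -> declared "significant"
p_b = two_sided_p(est_b / se)   # ~0.317 -> declared "not significant"

# Directly testing the *difference* between the two estimates:
z_diff = (est_a - est_b) / math.sqrt(se**2 + se**2)
p_diff = two_sided_p(z_diff)    # ~0.289 -> the difference is not significant
```

Comparing studies by their significance labels thus discards the information a direct comparison would use.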
Sifting the evidence—what's wrong with significance tests?
The high volume and often contradictory nature of medical research findings, however, is not only because of publication bias, but also because of the widespread misunderstanding of the nature of statistical significance.
Teaching hypothesis tests – time for significant change?
Suggested guidelines for the teaching of statistical inference to medical students are presented, and possible future developments are discussed.
A dirty dozen: twelve p-value misconceptions.
This commentary reviews a dozen common misinterpretations of the P value and contrasts it with its Bayesian counterpart, the Bayes factor, which has virtually all of the desirable properties of an evidential measure that the P value lacks, most notably interpretability.
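One of the misconceptions catalogued above (that p < 0.05 means the null hypothesis has only a 5% chance of being true) can be dispelled with simple expected-count arithmetic. The base rate and power below are assumed, illustrative values, not estimates from any field.

```python
# Expected counts among 1000 tested hypotheses (assumed, illustrative numbers).
n_hyp = 1000
prior_true = 0.10        # suppose 10% of tested effects are real
alpha, power = 0.05, 0.50

false_pos = (1 - prior_true) * n_hyp * alpha   # 45 true nulls reach p < 0.05
true_pos  = prior_true * n_hyp * power         # 50 real effects are detected

# Fraction of "significant" findings that are actually false positives:
fdr = false_pos / (false_pos + true_pos)       # ~0.47, not 0.05
```

Under these assumptions, nearly half of the results crossing the 0.05 line are false, illustrating why the p-value is not the probability that the null hypothesis is true.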