The persistence of underpowered studies in psychological research: causes, consequences, and remedies.

@article{Maxwell2004ThePO,
  title={The persistence of underpowered studies in psychological research: causes, consequences, and remedies},
  author={Scott E. Maxwell},
  journal={Psychological Methods},
  year={2004},
  volume={9},
  number={2},
  pages={147--163}
}
Underpowered studies persist in the psychological literature. This article examines reasons for their persistence and the effects on efforts to create a cumulative science. The "curse of multiplicities" plays a central role in the presentation. Most psychologists realize that testing multiple hypotheses in a single study affects the Type I error rate, but corresponding implications for power have largely been ignored. The presence of multiple hypothesis tests leads to 3 different …

Citations

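The abstract's central point can be made concrete with a small sketch (not from the article itself): with k independent hypothesis tests, the familywise Type I error rate grows with k, while the probability that *every* true effect reaches significance shrinks with k. The values of k, alpha, and per-test power below are illustrative.

```python
# Sketch (illustrative, assuming k independent tests): multiple tests
# inflate the familywise Type I error rate while simultaneously cutting
# the chance that all of a study's true effects reach significance.

def familywise_alpha(alpha: float, k: int) -> float:
    """P(at least one false positive) across k independent null tests."""
    return 1 - (1 - alpha) ** k

def all_significant(power: float, k: int) -> float:
    """P(all k independent tests on true effects reach significance)."""
    return power ** k

k = 3
print(familywise_alpha(0.05, k))  # familywise error rises above .05
print(all_significant(0.80, k))   # joint power falls well below .80
```

With three tests, per-test alpha of .05 yields a familywise rate of about .14, and even .80 per-test power leaves only about a 51% chance that all three tests succeed.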
Welcoming Quality in Non-Significance and Replication Work, but Moving Beyond the p-Value
The self-correcting nature of psychological and educational science has been seriously questioned. Recent special issues of Perspectives on Psychological Science and Psychology of Aesthetics, …
The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else?
There have been frequent expressions of concern over the supposed failure of researchers to conduct replication studies. But the large number of meta-analyses in our literatures shows that …
Can the behavioral sciences self-correct? A social epistemic study.
  • Felipe Romero
  • Sociology, Medicine
  • Studies in history and philosophy of science
  • 2016
TLDR: It is argued that methodological explanations of the "replicability crisis" in psychology are limited, an alternative explanation in terms of biases is proposed, and scientific self-correction should be understood as an interaction effect between inference methods and social structures.
On Attenuated Interactions, Measurement Error, and Statistical Power: Guidelines for Social and Personality Psychologists
TLDR: This investigation shows why even a programmatic series of six studies employing 2 × 2 designs, with samples exceeding N = 500, can be woefully underpowered to detect genuine effects.
Replicability Crisis in Social Psychology: Looking at the Past to Find New Pathways for the Future
Over the last few years, psychology researchers have become increasingly preoccupied with the question of whether findings from psychological studies are generally replicable. The debates have …
Stereotype Threat and Its Problems: Theory Misspecification in Research, Consequences, and Remedies
Despite the explosive growth in stereotype threat (ST) research over the decades, a substantial amount of variability in ST effects still cannot be explained by extant research. While some attribute …
On the scientific superiority of conceptual replications for scientific progress
Abstract: There is considerable current debate about the need for replication in the science of social psychology. Most of the current discussion and approbation is centered on direct or exact …
What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science
  • Prasad Patil, R. Peng, J. Leek
  • Psychology, Medicine
  • Perspectives on psychological science : a journal of the Association for Psychological Science
  • 2016
TLDR: The results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment.
Does the conclusion follow from the evidence? Recommendations for improving research
Abstract: Recent criticisms of social psychological research are considered in relation to an earlier crisis in social psychology. The current replication crisis is particularly severe because (1) …
The Impact of Complexity on Methods and Findings in Psychological Science
TLDR: The impact of complexity on research design, hypothesis testing, measurement, data analyses, reproducibility, and the communication of findings in psychological science is reviewed.

References

Showing 1-10 of 76 references
Statistical power of psychological research: what have we gained in 20 years?
  • J. Rossi
  • Psychology, Medicine
  • Journal of consulting and clinical psychology
  • 1990
TLDR: The implications of these results concerning the proliferation of Type I errors in the published literature, the failure of replication studies, and the interpretation of null (negative) results are emphasized.
A power primer.
  • J. Cohen
  • Mathematics, Medicine
  • Psychological bulletin
  • 1992
TLDR: A convenient, although not comprehensive, presentation of required sample sizes is provided. Here the sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests.
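Cohen's tabled sample sizes can be approximated with a standard normal-approximation formula (this sketch is not Cohen's exact t-based calculation, which gives slightly larger values): per group, n ≈ 2((z₁₋ₐ/₂ + z_power)/d)².

```python
# Normal-approximation sketch (not Cohen's exact table): per-group n for a
# two-sample t-test at two-tailed alpha and a target power level.
# Requires Python 3.8+ for statistics.NormalDist.
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n to detect standardized mean difference d."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Cohen's "medium" effect, d = 0.5:
print(n_per_group(0.5))  # 63 per group; Cohen's t-based table gives 64
```

The small discrepancy with Cohen's table (63 vs. 64) comes from using the normal rather than the noncentral t distribution.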
Statistical Significance Testing and Cumulative Knowledge in Psychology: Implications for Training of Researchers
Data analysis methods in psychology still emphasize statistical significance testing, despite numerous articles demonstrating its severe deficiencies. It is now possible to use meta-analysis to show …
The proof of the pudding: an illustration of the relative strengths of null hypothesis, meta-analysis, and Bayesian analysis.
TLDR: The authors illustrate the use of NHST along with 2 possible alternatives (meta-analysis as a primary data analysis strategy and Bayesian approaches) in a series of 3 studies to demonstrate that the approaches are not mutually exclusive but instead can be used to complement one another.
Do Studies of Statistical Power Have an Effect on the Power of Studies?
The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal …
The account taken of statistical power in research published in the British Journal of Psychology
Since approximately 1925, researchers in psychology have evaluated their hypotheses against the probability of making a Type I error. Attempts to persuade researchers to augment this information, …
The Perceptions and Usage of Statistical Power in Applied Psychology and Management Research
We first assess the current level of statistical power across articles in seven leading journals that represent a broad sample of applied psychology and management research. We next survey the …
The role of method in treatment effectiveness research: evidence from meta-analysis.
TLDR: A synthesis of 319 meta-analyses of psychological, behavioral, and educational treatment research was conducted to assess the influence of study method on observed effect sizes relative to that of substantive features of the interventions, highlighting the difficulty of detecting treatment outcomes.
Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology.
Abstract: Theories in "soft" areas of psychology lack the cumulative character of scientific knowledge. They tend neither to be refuted nor corroborated, but instead merely fade away as people lose …
Seven ways to increase power without increasing N.
TLDR: This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power.
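One such lever besides N is measurement reliability: under classical test theory the observed standardized effect is attenuated roughly by the square root of the outcome measure's reliability, so a more reliable measure raises power at fixed sample size. The sketch below (illustrative, not the chapter's own example; normal approximation, upper tail only) shows the effect for a true d of 0.5 with 64 participants per group.

```python
# Illustrative sketch: power gain from a more reliable outcome measure at
# fixed N, using the attenuation d_obs = d_true * sqrt(reliability) and a
# normal approximation. Requires Python 3.8+ for statistics.NormalDist.
from math import sqrt
from statistics import NormalDist

def power_two_sample(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sample test (upper tail, normal approx.)."""
    nd = NormalDist()
    ncp = d * sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return 1 - nd.cdf(nd.inv_cdf(1 - alpha / 2) - ncp)

true_d, n = 0.5, 64
for reliability in (0.6, 0.8, 1.0):
    observed_d = true_d * sqrt(reliability)
    print(reliability, round(power_two_sample(observed_d, n), 2))
```

Power climbs from roughly .59 at reliability .6 to roughly .81 at perfect reliability, without adding a single participant.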