Sample Size in Psychological Research over the Past 30 Years
  • Jacob M. Marszalek, Carolyn E. Barber, Julie Dawn Kohlhart, Bert H. Cooper
  • Perceptual and Motor Skills, pages 331–348
The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes… 
The Precision of Effect Size Estimation From Published Psychological Research
Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time; the theoretical implications are discussed along with ways of reducing CI widths and thus improving the precision of effect size estimation.
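The precision argument above can be illustrated with a short sketch. Assuming the common large-sample normal approximation for the standard error of Cohen's d (not the exact noncentral-t interval the paper would use), the CI width scales roughly as 1/√n:

```python
import math
from statistics import NormalDist

def d_ci_width(d, n1, n2, conf=0.95):
    """Approximate CI width for Cohen's d via the large-sample SE formula.

    SE(d) ~= sqrt((n1 + n2)/(n1 * n2) + d**2 / (2 * (n1 + n2)))
    """
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # two-sided critical value
    return 2 * z * se

# Quadrupling the per-group n halves the CI width (both SE terms scale as 1/n).
w_small = d_ci_width(0.5, 50, 50)    # ~0.80
w_large = d_ci_width(0.5, 200, 200)  # ~0.40
```

The example makes concrete why "not discernibly decreasing" CI widths imply stagnant sample sizes: halving the width of an interval around d requires four times the participants.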
An Introduction to Registered Replication Reports at Perspectives on Psychological Science
This issue of Perspectives on Psychological Science includes the first example of a new type of journal article, one designed to provide a more definitive measure of the size and reliability of important effects: the Registered Replication Report (RRR; see Simons & Holcombe, 2014).
Researchers’ Intuitions About Power in Psychological Research
Survey of published research psychologists found large discrepancies between their reports of their preferred amount of power and the actual power of their studies, and recommended that researchers conduct and report formal power analyses for their studies.
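The formal power analysis this survey recommends can be sketched with the standard normal-approximation formula for a two-sample t-test, n per group ≈ 2((z₁₋α/₂ + z₁₋β)/d)². This is a simplified stand-in, not the paper's procedure, and it slightly underestimates the exact t-based answer (e.g. 63 vs. 64 per group for d = 0.5, α = .05, power = .80):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample t-test
    (normal approximation to the power function)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-tailed critical value
    z_beta = z(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" effect (d = 0.5) needs ~63 per group; a "small" one (d = 0.2)
# needs ~393 per group -- far beyond typical published sample sizes.
```

Running numbers like these before data collection is exactly the habit the authors found missing: intuition tends to dramatically underestimate the n required for small effects.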
How to Detect Publication Bias in Psychological Research
Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size
Power problems: n > 138
The sample sizes of well-powered studies are explicated, showing that the combination of small effect sizes and small sample sizes in studies investigating bilingual advantages in executive function (EF) results in many studies being underpowered.
Continuously Cumulating Meta-Analysis and Replicability
This work presents a nontechnical introduction to the CCMA framework, and explains how it can be used to address aspects of replicability or more generally to assess quantitative evidence from numerous studies, and presents some examples and simulation results using the approach that show how the combination of evidence can yield improved results over the consideration of single studies.
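The cumulative-evidence idea behind CCMA can be illustrated with a minimal inverse-variance (fixed-effect) pooling step — a sketch of the standard textbook formula, not the CCMA framework itself:

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # pooled SE shrinks as studies accumulate
    return pooled, se

# Three small hypothetical studies: individually imprecise (SEs 0.20-0.30),
# but pooling them yields a noticeably tighter estimate.
est, se = pool_fixed_effect([0.30, 0.50, 0.40], [0.04, 0.09, 0.0625])
```

This is the core mechanism the entry describes: each added replication lowers the variance of the combined estimate, so cumulating studies can settle a question that no single underpowered study can.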
Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods
Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to
A Historically Based Review of Empirical Work on Color and Psychological Functioning: Content, Methods, and Recommendations for Future Research
  • A. Elliot
  • Psychology
    Review of General Psychology
  • 2018
Empirical work on color and psychological functioning has a long history, dating back to the 19th century. This early research focused on five different areas: Arousal, physical strength, preference,
The Rules of the Game Called Psychological Science
This paper considers 13 meta-analyses covering 281 primary studies in various fields of psychology and finds indications of biases and/or an excess of significant results in seven, highlighting the need for sufficiently powerful replications and changes in journal policies.


Sample Size in Psychological Research
This study was conducted to provide information on the typical sample size employed in psychological research, as it is reported in selected American Psychological Association journals. All the
Statistical power of psychological research: what have we gained in 20 years?
  • J. Rossi
  • Psychology, Medicine
    Journal of consulting and clinical psychology
  • 1990
The implications of these results concerning the proliferation of Type I errors in the published literature, the failure of replication studies, and the interpretation of null (negative) results are emphasized.
Sample size in four areas of psychological research.
  • C. Holmes
  • Psychology, Medicine
    Transactions of the Kansas Academy of Science. Kansas Academy of Science
  • 1983
The current article presents data journal by journal in order to examine sample sizes in four areas of psychological research: abnormal, applied, developmental, and experimental psychology.
Do Studies of Statistical Power Have an Effect on the Power of Studies?
The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal
The persistence of underpowered studies in psychological research: causes, consequences, and remedies.
  • S. Maxwell
  • Psychology, Medicine
    Psychological methods
  • 2004
Underpowered studies persist in the psychological literature; their effects on efforts to create a cumulative science are examined, and the "curse of multiplicities" plays a central role.
Statistical power of articles published in three health psychology-related journals.
  • J. Maddock, J. Rossi
  • Medicine, Psychology
    Health psychology : official journal of the Division of Health Psychology, American Psychological Association
  • 2001
Power was calculated for 8,266 statistical tests in 187 journal articles published in the 1997 volumes of Health Psychology, Addictive Behaviors, and the Journal of Studies on Alcohol, providing a good estimate for the field of health psychology.
Statistical Methods in Psychology Journals: Guidelines and Explanations
In the light of continuing debate over the applications of significance testing in psychology journals and following the publication of Cohen's (1994) article, the Board of Scientific Affairs (BSA)
Reporting standards for research in psychology: why do we need them? What might they be?
The resulting recommendations contain standards for all journal articles, and more specific standards for reports of studies with experimental manipulations or evaluations of interventions using research designs involving random or nonrandom assignment.
Research Methods in Psychology
Comprehensive coverage of computer usage throughout the book illustrates the importance of computers at every stage of the research process, from idea generation and data collection to quantitative analysis and the gathering of research information.
The Fifth edition of the Apa Publication Manual: Why its Statistics Recommendations are so Controversial
The fifth edition of the Publication Manual of the American Psychological Association (APA) draws on recommendations for improving statistical practices made by the APA Task Force on Statistical Inference.