Methodological reporting behavior, sample sizes, and statistical power in studies of event-related potentials: Barriers to reproducibility and replicability.

@article{Clayson2019MethodologicalRB,
  title={Methodological reporting behavior, sample sizes, and statistical power in studies of event-related potentials: Barriers to reproducibility and replicability},
  author={Peter E. Clayson and Kaylie A. Carbine and Scott A. Baldwin and Michael J. Larson},
  journal={Psychophysiology},
  year={2019},
  pages={e13437}
}
Methodological reporting guidelines for studies of ERPs were updated in Psychophysiology in 2014. These guidelines facilitate the communication of key methodological parameters (e.g., preprocessing steps). Failing to report key parameters represents a barrier to replication efforts, and difficulty with replicability increases in the presence of small sample sizes and low statistical power. We assessed whether guidelines are followed and estimated the average sample size and power in recent… 
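The abstract's central quantity, statistical power for a given sample size and effect size, can be sketched with a short calculation. This is a minimal illustration using the standard normal approximation to a two-sided one-sample (or paired) t-test; the effect size and sample sizes below are illustrative values, not figures from the paper.

```python
import math
from statistics import NormalDist

def ttest_power_approx(d, n, alpha=0.05):
    """Approximate two-sided power of a one-sample or paired t-test.

    Normal approximation: under the alternative, the test statistic is
    roughly N(d * sqrt(n), 1), where d is Cohen's d and n is the sample size.
    """
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)  # two-sided critical value
    nc = d * math.sqrt(n)                 # noncentrality parameter
    # Probability the statistic exceeds the critical value in either tail
    return norm.cdf(nc - z_crit) + norm.cdf(-nc - z_crit)

# Illustrative: a medium effect (d = 0.5) with n = 34 reaches roughly the
# conventional 80% power benchmark, while n = 15 falls well short of it.
print(round(ttest_power_approx(0.5, 34), 2))
print(round(ttest_power_approx(0.5, 15), 2))
```

The normal approximation slightly overstates power for small n relative to the exact noncentral-t calculation, but it conveys the point the paper makes: typical ERP sample sizes leave many studies underpowered for small-to-medium effects.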
