Too good to be true: Publication bias in two prominent studies from experimental psychology

@article{Francis2012TooGT,
  title={Too good to be true: Publication bias in two prominent studies from experimental psychology},
  author={Gregory Francis},
  journal={Psychonomic Bulletin \& Review},
  year={2012},
  volume={19},
  pages={151-156}
}
  • G. Francis
  • Published 15 February 2012
  • Psychology, Biology
  • Psychonomic Bulletin & Review
Empirical replication has long been considered the final arbiter of phenomena in science, but replication is undermined when there is evidence for publication bias. Evidence for publication bias in a set of experiments can be found when the observed number of rejections of the null hypothesis exceeds the expected number of rejections. Application of this test reveals evidence of publication bias in two prominent investigations from experimental psychology that have purported to reveal evidence… 
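The test described in the abstract can be made concrete with a short sketch. Assuming each experiment is a two-sample t-test with equal group sizes, and using a pooled effect size and sample sizes invented for illustration (not the paper's data), the code below estimates each experiment's power, sums the powers to get the expected number of rejections, and multiplies them to get the probability that every experiment rejects the null:

```python
# Excess-success sketch: expected rejections for a set of experiments,
# assuming two-sample t-tests with equal group sizes (made-up numbers).
import numpy as np
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test for standardized effect d."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|T| > t_crit) under the noncentral t distribution
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

pooled_d = 0.35                              # pooled effect size (hypothetical)
ns = [20, 25, 30, 22, 28]                    # per-group n for five experiments

powers = [power_two_sample_t(pooled_d, n) for n in ns]
expected = sum(powers)                       # expected number of rejections
p_all = np.prod(powers)                      # chance that all five reject H0

print(f"expected rejections: {expected:.2f} of {len(ns)}")
print(f"P(all significant):  {p_all:.3f}")
```

If all five experiments were reported as significant but the joint probability of that outcome is small (0.1 is the criterion used in this literature), the set of results looks more successful than its power can plausibly explain.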

Publication bias and the failure of replication in experimental psychology

  • G. Francis
  • Psychology
    Psychonomic Bulletin & Review
  • 2012
TLDR
This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication, and demonstrates that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias.

Replication, statistical consistency, and publication bias.

A Bayesian approach to mitigation of publication bias

TLDR
A Bayesian model averaging approach is demonstrated that takes into account the possibility of publication bias and allows for a better estimate of true underlying effect size, leading to a more conservative interpretation of published studies as well as meta-analyses.
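One way to see the idea is a toy grid approximation (a minimal sketch, not the model in the paper): fit the published effects under two data-generating stories, one with no selection and one where only significant results get published, then weight each model's effect estimate by its posterior probability. All numbers below are hypothetical.

```python
# Toy model-averaging sketch: "no selection" vs "only significant
# results are published" (hypothetical data, not the paper's model).
import numpy as np
from scipy import stats

effects = np.array([0.45, 0.52, 0.61, 0.48])  # published effect estimates
ses     = np.array([0.20, 0.22, 0.25, 0.21])  # their standard errors

mu_grid = np.linspace(-1.0, 1.5, 501)         # grid over the true effect
prior   = stats.norm.pdf(mu_grid, 0, 0.5)     # prior density on the true effect
z_crit  = 1.96                                # publication cutoff (z > 1.96)

def loglik(selected):
    ll = np.zeros_like(mu_grid)
    for y, se in zip(effects, ses):
        dens = stats.norm.pdf(y, mu_grid, se)
        if selected:
            # truncated likelihood: condition on the study being significant
            # (valid here because every observed effect is significant)
            p_pub = stats.norm.sf(z_crit * se, mu_grid, se)
            dens = dens / np.clip(p_pub, 1e-12, None)
        ll += np.log(np.clip(dens, 1e-300, None))
    return ll

post, evidence = {}, {}
for name, sel in [("no_bias", False), ("selection", True)]:
    unnorm = prior * np.exp(loglik(sel))
    evidence[name] = np.trapz(unnorm, mu_grid)   # marginal likelihood
    post[name] = unnorm / evidence[name]

w = evidence["selection"] / (evidence["no_bias"] + evidence["selection"])
mean = {k: np.trapz(mu_grid * v, mu_grid) for k, v in post.items()}
avg = (1 - w) * mean["no_bias"] + w * mean["selection"]
print(f"P(selection | data) = {w:.2f}")
print(f"naive: {mean['no_bias']:.2f}  bias-aware: {mean['selection']:.2f}  averaged: {avg:.2f}")
```

Because the averaged estimate mixes in the selection model, it pulls the naive effect estimate toward zero in proportion to how plausible publication bias looks, which is the conservative interpretation the TLDR describes.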

The Crisis of Confidence in Research Findings in Psychology: Is Lack of Replication the Real Problem? Or Is It Something Else?

There have been frequent expressions of concern over the supposed failure of researchers to conduct replication studies. But the large number of meta-analyses in our literatures shows that…

The Psychology of Replication and Replication in Psychology

  • G. Francis
  • Psychology
    Perspectives on Psychological Science
  • 2012
TLDR
The implications of this observation are described, and a test for too much successful replication is demonstrated using a set of experiments from a recent research paper.

Analysis of Galak and Meyvis's (2011) Experiments

Like other scientists, psychologists believe experimental replication to be the final arbiter for determining the validity of an empirical finding. Reports in psychology journals often attempt to…

Excess Success for Psychology Articles in the Journal Science

TLDR
A systematic analysis of the relationship between empirical data and theoretical conclusions for a set of experimental psychology articles published in the journal Science between 2005 and 2012 suggests a systematic pattern of excess success among psychology articles in that journal.

The frequency of excess success for articles in Psychological Science

  • G. Francis
  • Psychology
    Psychonomic Bulletin & Review
  • 2014
TLDR
An objective test for excess success is applied to a large set of articles published in the journal Psychological Science between 2009 and 2012, finding that problems appeared for 82% (36 of 44) of the articles that had four or more experiments and could be analyzed.

The Replication Paradox: Combining Studies can Decrease Accuracy of Effect Size Estimates

Replication is often viewed as the demarcation between science and nonscience. However, contrary to the commonly held view, we show that in the current (selective) publication system replications may…
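A few lines of simulation illustrate the mechanism, assuming (hypothetically) a small true effect, fixed sample sizes, and a filter that publishes only significant results:

```python
# Selective publication inflates the average published effect
# (toy simulation; true effect, n, and cutoff are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
true_d, n = 0.2, 30                   # true effect, per-group sample size
se = np.sqrt(2 / n)                   # approx. SE of a standardized difference

studies = rng.normal(true_d, se, 100_000)   # observed effect sizes
published = studies[studies / se > 1.96]    # only significant results survive

print(f"true effect:               {true_d:.2f}")
print(f"mean of all studies:       {studies.mean():.2f}")
print(f"mean of published studies: {published.mean():.2f}")
# Averaging more published studies tightens the estimate around the
# *published* mean, not the true one, so pooling can reduce accuracy.
```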

Replication initiative: beware misinterpretation.

TLDR
Two new initiatives are exploring the reproducibility of findings within psychology, and proponents of these initiatives should be careful, because it is easy to misinterpret replication successes and failures in a field that uses statistics.
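Part of the arithmetic behind that caution is simple: even a real effect studied with 50% power produces "failed" replications half the time, so a run of mixed results is exactly what a true but modestly powered effect should look like. A quick binomial check (power value chosen purely for illustration):

```python
# With 50% power, "failure to replicate" is the coin-flip outcome.
from scipy import stats

power, m = 0.5, 5          # five independent replication attempts
# Probability that at most 2 of the 5 succeed even though the effect is real:
print(stats.binom.cdf(2, m, power))   # 0.5
```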
...

References

Showing 1-10 of 36 references

Publication decisions revisited: the effect of the outcome of statistical tests on the decision to publish and vice versa

TLDR
Evidence is presented that published results of scientific investigations are not a representative sample of the results of all scientific studies, and it is indicated that practices leading to publication bias have not changed over a period of 30 years.

Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa

There is some evidence that in fields where statistical tests of significance are commonly used, research which yields nonsignificant results is not published. Such research being unknown to…

The (mis)reporting of statistical results in psychology journals

TLDR
The authors' results indicate that around 18% of statistical results in the psychological literature are incorrectly reported, and that errors were often in line with researchers’ expectations.
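The kind of consistency check behind such estimates can be sketched directly: recompute the p value from a reported test statistic and its degrees of freedom and compare it with the reported p. The entries below are invented examples, not results from the study:

```python
# Statcheck-style consistency check: recompute p from the reported
# test statistic and df, then compare with the reported p (made-up rows).
from scipy import stats

reported = [           # (t statistic, df, reported p)
    (2.10, 28, 0.045),
    (1.52, 40, 0.04),  # inconsistent: the recomputed p is about 0.14
]

for t_val, df, p_rep in reported:
    p_calc = 2 * stats.t.sf(abs(t_val), df)   # two-sided p from t and df
    flag = "OK" if abs(p_calc - p_rep) < 0.005 else "MISMATCH"
    print(f"t({df}) = {t_val}: reported p = {p_rep}, recomputed p = {p_calc:.3f} [{flag}]")
```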

Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results

TLDR
It is suggested that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions, which highlights the importance of establishing mandatory data archiving policies.

A Bayes factor meta-analysis of Bem’s ESP claim

TLDR
A meta-analytic Bayes factor is developed that describes how researchers should update their prior beliefs about the odds of hypotheses in light of data across several experiments, and it finds the evidence that people can feel the future with neutral and erotic stimuli to be slight.
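The updating logic itself is just multiplication of odds. A minimal sketch, with per-experiment Bayes factors and prior odds invented for illustration (the paper derives its Bayes factors from the experiments themselves):

```python
# Sequential updating with Bayes factors (all values hypothetical).
import numpy as np

prior_odds = 1 / 100            # e.g. 100-to-1 against the hypothesis a priori
bfs = [2.0, 0.4, 1.5, 0.8]      # per-experiment Bayes factors for H1 over H0

combined = np.prod(bfs)         # independent experiments multiply
posterior_odds = prior_odds * combined
print(f"combined BF = {combined:.2f}")
print(f"posterior odds for H1 = {posterior_odds:.4f} "
      f"(probability {posterior_odds / (1 + posterior_odds):.2%})")
```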

Operating characteristics of a rank correlation test for publication bias.

An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic…
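A minimal sketch in the spirit of this test, using hypothetical meta-analytic data: standardize each effect against the pooled fixed-effect estimate, then correlate the standardized effects with their variances using Kendall's tau:

```python
# Rank-correlation check in the spirit of the adjusted test
# (hypothetical meta-analytic data).
import numpy as np
from scipy import stats

effects   = np.array([0.10, 0.35, 0.42, 0.55, 0.70])
variances = np.array([0.010, 0.030, 0.045, 0.060, 0.090])

w = 1 / variances                            # fixed-effect weights
pooled = np.sum(w * effects) / np.sum(w)     # pooled effect estimate
v_star = variances - 1 / np.sum(w)           # adjusted variances
z = (effects - pooled) / np.sqrt(v_star)     # standardized deviations

tau, p = stats.kendalltau(z, variances)
print(f"Kendall tau = {tau:.2f}, p = {p:.3f}")
# A positive correlation (smaller studies reporting larger effects)
# is read as a symptom of publication bias.
```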

Must psychologists change the way they analyze their data?

TLDR
It is argued that Wagenmakers, Wetzels, Borsboom, and van der Maas have incorrectly selected an unrealistic prior distribution for their analysis, and that a Bayesian analysis using a more reasonable distribution yields strong evidence in favor of the psi hypothesis.

Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011).

TLDR
It is concluded that Bem's p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.

A practical solution to the pervasive problems of p values

TLDR
The BIC provides an approximation to a Bayesian hypothesis test, does not require the specification of priors, and can be easily calculated from SPSS output.
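The approximation itself fits in a few lines. A sketch with hypothetical ANOVA-style sums of squared errors, where the BIC difference between two nested models converts into an approximate Bayes factor:

```python
# BIC-to-Bayes-factor approximation (hypothetical ANOVA numbers).
import math

n = 40                        # total observations
sse_null, k_null = 52.3, 1    # error sum of squares, free parameters under H0
sse_alt,  k_alt  = 44.1, 2    # same under H1

bic = lambda sse, k: n * math.log(sse / n) + k * math.log(n)
delta = bic(sse_alt, k_alt) - bic(sse_null, k_null)

bf01 = math.exp(delta / 2)    # approximate BF for H0; its inverse favors H1
print(f"BIC_null = {bic(sse_null, k_null):.2f}, BIC_alt = {bic(sse_alt, k_alt):.2f}")
print(f"BF01 ~ {bf01:.2f}   (BF10 ~ {1 / bf01:.2f})")
```

With these made-up numbers the alternative model has the lower BIC, so the approximate Bayes factor favors H1 by roughly a factor of five, without any prior having to be specified.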