The natural selection of bad science

@article{Smaldino2016TheNS,
  title={The natural selection of bad science},
  author={Paul E. Smaldino and Richard McElreath},
  journal={Royal Society Open Science},
  year={2016},
  volume={3}
}
Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement…
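
The selection dynamic described above can be illustrated with a small simulation. This is a minimal sketch rather than the authors' published model (which tracks effort, power and replication separately): labs vary in methodological rigour, lower rigour inflates the false-positive rate and hence the publication count, and the most-published labs are preferentially copied. All names and parameter values here are assumptions chosen for illustration.

import random

N_LABS = 100
GENERATIONS = 200
BASE_RATE = 0.1    # assumed fraction of tested hypotheses that are true
POWER = 0.8        # assumed probability of detecting a true effect
MUTATION_SD = 0.01

def publications(rigour, n_hypotheses=50):
    # rigour in [0, 1] maps inversely onto the false-positive rate:
    # low-rigour labs publish more because false positives are cheap
    alpha = 0.05 + (1 - rigour) * 0.45
    pubs = 0
    for _ in range(n_hypotheses):
        if random.random() < BASE_RATE:
            pubs += random.random() < POWER   # true positive
        else:
            pubs += random.random() < alpha   # false positive
    return pubs

labs = [random.random() for _ in range(N_LABS)]   # initial rigour values
for _ in range(GENERATIONS):
    scored = sorted(labs, key=publications, reverse=True)
    # the most-published half of labs "reproduce": each survivor's
    # rigour is copied twice, with a small mutation
    survivors = scored[: N_LABS // 2]
    labs = [min(1.0, max(0.0, r + random.gauss(0, MUTATION_SD)))
            for r in survivors for _ in range(2)]

print(f"mean rigour after selection: {sum(labs) / len(labs):.2f}")

Under these assumptions, mean rigour drifts downward over generations even though no lab deliberately cheats; selection on publication count alone is enough to erode methodological quality.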

What are the reasons why bad science is being naturally selected

This brief note is based on an article recently published by Paul E. Smaldino and Richard McElreath in the journal Royal Society Open Science. In this publication, Smaldino and McElreath…

Open science and modified funding lotteries can impede the natural selection of bad science

TLDR
Modified lotteries, which allocate funding randomly among proposals that pass a threshold for methodological rigour, effectively reduce the rate of false discoveries, particularly when paired with open science improvements that increase the publication of negative results and improve the quality of peer review.
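
As a rough illustration of the mechanism, a modified lottery might look like the sketch below, assuming each proposal carries a scalar rigour score from peer review; the function name, threshold and award count are hypothetical.

import random

def modified_lottery(proposals, rigour_threshold=0.7, n_awards=5):
    # proposals: list of (proposal_id, rigour_score) pairs
    eligible = [p for p in proposals if p[1] >= rigour_threshold]
    # random allocation among all sufficiently rigorous proposals removes
    # the incentive to chase flashy results rather than sound methods
    return random.sample(eligible, min(n_awards, len(eligible)))

pool = [(f"P{i}", random.random()) for i in range(20)]
print(modified_lottery(pool))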

Persistence of false paradigms in low-power sciences

TLDR
It is found that when a science lacks evidence to discriminate between theories, or when tenure decisions do not rely on available evidence, true theories may not be adopted, and only an increase in power can ignite convergence to the true paradigm.

The natural selection of good science.

TLDR
This work models the cultural evolution of research practices when laboratories are allowed to expend effort on theory, enabling them, at a cost, to identify hypotheses that are more likely to be true before empirical testing.

Should We Strive to Make Science Bias-Free? A Philosophical Assessment of the Reproducibility Crisis

  • R. Hudson
  • Education
    Journal for General Philosophy of Science / Zeitschrift für allgemeine Wissenschaftstheorie
  • 2021
TLDR
It is argued that advocating the value-ladenness of science would deepen the reproducibility crisis, and that for the majority of scientists the crisis is due, at least in part, to a form of publication bias.

The Natural Selection of Conservative Science

Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions

TLDR
An optimality model is developed that predicts the most rational research strategy, in terms of the proportion of research effort spent on seeking novel results rather than on confirmatory studies, and the amount of research effort per exploratory study.

Higginson, A. D., & Munafò, M. R. (2016). Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14(11).

We can regard the wider incentive structures that operate across science, such as the priority given to novel findings, as an ecosystem within which scientists strive to maximise their fitness.

Why and how we should join the shift from significance testing to estimation

TLDR
It is concluded that studies in ecology and evolutionary biology are mostly exploratory and descriptive, and should shift from claiming to ‘test’ specific hypotheses statistically to describing and discussing many hypotheses (possible true effect sizes) that are most compatible with the authors' data, given their statistical model.
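
In miniature, the shift the authors advocate is from a bare significance verdict to an effect estimate with an interval. The data and group labels below are invented, and the normal-approximation interval is a simplification (a t-interval would be more defensible at this sample size).

import math
import statistics

control   = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.4, 4.8]
treatment = [5.2, 4.9, 5.6, 4.7, 5.3, 5.1, 4.5, 5.5]

# point estimate of the effect and its standard error
diff = statistics.mean(treatment) - statistics.mean(control)
se = math.sqrt(statistics.variance(treatment) / len(treatment)
               + statistics.variance(control) / len(control))

# 95% interval via the normal approximation, to keep the sketch short
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"effect estimate: {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")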

The World of Research Has Gone Berserk: Modeling the Consequences of Requiring “Greater Statistical Stringency” for Scientific Publication

TLDR
A novel optimality model is developed that predicts a researcher’s most rational use of resources: the number of studies to undertake, the statistical power to devote to each study, and the pre-study odds to pursue. Considering a distribution of preferred research strategies then allows one to estimate the reliability of published research.
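
The trade-off that such optimality models formalize can be sketched as follows; this is not the paper's model, and the effect size, budget, prior odds and significance level are all assumed values. With a fixed total sample budget, undertaking more studies lowers the power of each one, so false positives make up a growing share of the "discoveries".

import math

def power(n_per_group, d=0.4, alpha_z=1.645):
    # approximate power of a one-sided two-sample z-test at alpha = 0.05
    z = d * math.sqrt(n_per_group / 2) - alpha_z
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

BUDGET = 1000   # total participants available per group, across studies
PRIOR = 0.2     # assumed probability that a tested hypothesis is true

for k in (2, 5, 10, 25, 50):   # number of studies undertaken
    n = BUDGET // k
    p = power(n)
    true_pos = k * PRIOR * p
    false_pos = k * (1 - PRIOR) * 0.05
    print(f"{k:>3} studies, n={n:<4} power={p:.2f}  "
          f"expected true/false positives: {true_pos:.1f} / {false_pos:.1f}")
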
...

References

SHOWING 1-10 OF 120 REFERENCES

Replication, Communication, and the Population Dynamics of Scientific Discovery

TLDR
A mathematical model of scientific discovery that combines hypothesis formation, replication, publication bias, and variation in research quality is developed and it is found that communication of negative replications may aid true discovery even when attempts to replicate have diminished power.

Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability

An academic scientist's professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting…

Why Science Is Not Necessarily Self-Correcting

  • J. Ioannidis
  • Psychology
    Perspectives on Psychological Science
  • 2012
TLDR
A number of impediments to self-correction that have been empirically studied in psychological science are cataloged and some proposed solutions to promote sound replication practices enhancing the credibility of scientific results are discussed.

Is the Replicability Crisis Overblown? Three Arguments Examined

  • H. Pashler, C. Harris
  • Psychology
    Perspectives on Psychological Science
  • 2012
TLDR
It is argued that there are no plausible concrete scenarios to back up forecasts that the field will correct itself on its own, and that what is needed is not patience, but rather systematic reforms in scientific practice.

Clean Data: Statistical Artefacts Wash Out Replication Efforts

Johnson, Cheung, and Donnellan (2014a) reported a failure to replicate the effect of cleanliness on moral judgment reported by Schnall, Benton, and Harvey (2008). However, inspection of the replication data shows…

Publication bias in the social sciences: Unlocking the file drawer

TLDR
Fully half of peer-reviewed and implemented social science experiments are not published, providing direct evidence of publication bias and identifying the stage of research production at which publication bias occurs: Authors do not write up and submit null findings.

Article Commentary: On the Persistence of Low Power in Psychological Science

TLDR
The authors surveyed studies published recently in a high-ranking psychology journal and contacted the contributing researchers to establish the rationale used for deciding sample size, finding that approximately one third held beliefs that would serve, on average, to reduce statistical power.

Do Studies of Statistical Power Have an Effect on the Power of Studies?

The long-term impact of studies of statistical power is investigated using J. Cohen's (1962) pioneering work as an example. We argue that the impact is nil; the power of studies in the same journal…

Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011).

TLDR
It is concluded that Bem's p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.

Theory building through replication: Response to commentaries on the “Many Labs” replication project

While direct replications such as the “Many Labs” project are extremely valuable in testing the reliability of published findings across laboratories, they reflect the common reliance in psychology…
...