Science with no fiction: measuring the veracity of scientific reports by citation analysis

Abstract

The current crisis of veracity in biomedical research is enabled by the lack of publicly accessible information on whether reported scientific claims are valid. One approach to solving this problem is to replicate previous studies at specialized reproducibility centers. However, this approach is costly, if not unaffordable, and raises a number of as-yet-unresolved concerns about its effectiveness and validity. We propose an approach that yields a simple numerical measure of veracity, the R-factor, by summarizing the outcomes of already published studies that have attempted to test a claim. The R-factor of an investigator, a journal, or an institution would be the average of the R-factors of the claims they reported. We illustrate this approach using three studies recently tested by a replication initiative, compare the results, and discuss how using the R-factor can help improve the veracity of scientific research.

Introduction

The current crisis of veracity in biomedical research (and with less than half of preclinical studies reproducible (Begley & Ellis, 2012; Prinz, Schlange, & Asadullah, 2011), it is truly a crisis) has spilled from a discussion in scientific journals (Begley & Ellis, 2012; Casadevall & Fang, 2010; Collins & Tabak, 2014; Fang, Steen, & Casadevall, 2012; Freedman, Cockburn, & Simcoe, 2015; Ioannidis, 2005, 2017; Leek & Jager, 2017) into the pages of national newspapers (Angell, 2009; Carey, 2015; Glanz, 2017) and popular books with provocative titles (Harris, 2017). This development suggests that scientists might need to put their house in order before asking for more money to expand it. The approaches that have been tried or proposed include: calling on scientists to do better and “publish houses of brick, not mansions of straw” (Kaelin, 2017), perhaps under the scrutiny of video surveillance in the laboratory (Clark, 2017); requiring raw data and additional information when submitting an article (Editorial, 2017a) or a funding report (https://grants.nih.gov/reproducibility/index.htm); and establishing reproducibility initiatives that replicate prior studies to serve as a deterrent against future lapses in scientific rigor. One of these initiatives, the Reproducibility Project: Cancer Biology, was organized following the report that only 6 out of 53 landmark cancer research studies could be verified (Begley & Ellis, 2012) and set out to replicate 50 cancer research reports out of the 290,444 published in the field between 2010 and 2012 (Errington et al., 2014). The reports on replicating the first seven studies have been published this year (Aird, Kandela, Mantis, & Reproducibility Project: Cancer, 2017; Horrigan et al., 2017; Horrigan & Reproducibility Project: Cancer, 2017; Kandela, Aird, & Reproducibility Project: Cancer, 2017; Mantis, Kandela, Aird, & Reproducibility Project: Cancer, 2017; Shan, Fung, Kosaka, Danet-Desnoyers, & Reproducibility Project: Cancer, 2017; Showalter et al., 2017).
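To make the R-factor concrete, the following minimal sketch (in Python) computes it under the assumptions suggested by the abstract: the R-factor of a claim is taken to be the fraction of published reports that confirmed the claim among all reports that tested it, and the R-factor of an investigator, journal, or institution is the average over their claims. The function names and the counts in the example are hypothetical and are not taken from the paper.

```python
# Minimal sketch of the R-factor, under the assumptions stated above.
# All names and counts here are hypothetical illustrations.

from statistics import mean


def claim_r_factor(confirming: int, refuting: int) -> float:
    """R-factor of one claim: confirming reports / all reports that tested it."""
    attempts = confirming + refuting
    if attempts == 0:
        raise ValueError("The claim has not been tested by any published study.")
    return confirming / attempts


def aggregate_r_factor(claims: list[tuple[int, int]]) -> float:
    """R-factor of an investigator, journal, or institution: mean over their claims."""
    return mean(claim_r_factor(confirming, refuting) for confirming, refuting in claims)


# Hypothetical example: three claims, each tested by several independent reports.
claims = [(8, 2), (3, 3), (5, 0)]  # (confirming, refuting) counts per claim
print(round(aggregate_r_factor(claims), 2))  # 0.77
```

In this sketch a claim with no published tests is left undefined rather than scored as zero, since the absence of replication attempts says nothing about whether the claim is valid.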


Cite this paper

@article{Grabitz2017ScienceWN,
  title={Science with no fiction: measuring the veracity of scientific reports by citation analysis},
  author={Peter Grabitz and Yuri Lazebnik and Josh Nicholson and Sean Rife},
  journal={bioRxiv},
  year={2017},
  doi={10.1101/172940}
}