A self-governing, self-regulating system for assessing scientific predictive power

@inproceedings{Rogers2022ASS,
  title={A self-governing, self-regulating system for assessing scientific predictive power},
  author={Ted Rogers},
  year={2022}
}
I propose a method for tracking and assessing scientific progress using a prediction consensus algorithm designed for the purpose. The protocol obviates the need for centralized referees to generate scientific questions, gather predictions, and assess the accuracy or success of those predictions. It relies instead on crowd wisdom and a system of checks and balances for all tasks. It is intended to take the form of a web-based, searchable database. I describe a prototype implementation that I…
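As a rough illustration of the moving parts the abstract describes (crowd-gathered predictions on a question, later scored against the realized outcome), the sketch below scores binary forecasts with the standard Brier score and takes the median as a simple consensus. These scoring and aggregation choices, and all names in the code, are assumptions for illustration only; the paper's actual consensus algorithm and checks-and-balances machinery are not reproduced here.

```python
from dataclasses import dataclass, field
from statistics import median

@dataclass
class Prediction:
    """One forecaster's probability that a stated outcome will occur."""
    forecaster: str
    probability: float  # in [0, 1]

@dataclass
class Question:
    """A crowd-sourced scientific question with a binary, checkable outcome."""
    text: str
    predictions: list[Prediction] = field(default_factory=list)
    outcome: bool | None = None  # filled in once the result is known

def consensus_probability(q: Question) -> float:
    """Median of the crowd's probabilities; robust to a few extreme entries."""
    return median(p.probability for p in q.predictions)

def brier_score(probability: float, outcome: bool) -> float:
    """Squared error between a probability and the realized 0/1 outcome.

    0.0 is a perfect forecast; 0.25 is the score of an uninformative
    50/50 guess. (Brier scoring is a standard tool; it is an assumed
    stand-in here, not necessarily the paper's metric.)
    """
    return (probability - float(outcome)) ** 2

def assess(q: Question) -> dict[str, float]:
    """Score every forecaster, plus the consensus, once the outcome is known."""
    if q.outcome is None:
        raise ValueError("question is still open")
    scores = {p.forecaster: brier_score(p.probability, q.outcome)
              for p in q.predictions}
    scores["consensus"] = brier_score(consensus_probability(q), q.outcome)
    return scores

# Example: three forecasters weigh in on a hypothetical binary question.
q = Question("Will experiment X confirm prediction Y by 2025?")
q.predictions = [Prediction("alice", 0.8),
                 Prediction("bob", 0.6),
                 Prediction("carol", 0.3)]
q.outcome = True
print(assess(q))  # lower Brier scores indicate better-calibrated forecasts
```

The median consensus here is merely a placeholder: in the protocol the abstract describes, the aggregation and assessment rules would themselves be subject to the crowd's checks and balances rather than fixed in code.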
