Corpus ID: 246035447

A Synthetic Prediction Market for Estimating Confidence in Published Work

Sarah Rajtmajer,1 Christopher Griffin,1 Jian Wu,2 Robert Fraleigh,1 Laxmaan Balaji,1 Anna Squicciarini,1 Anthony Kwasnica,1 David Pennock,3 Michael McLaughlin,1 Timothy Fritton,1 Nishanth Nakshatri,1 Arjun Menon,1 Sai Ajay Modukuri,1 Rajal Nivargi,1 Xin Wei,2 C. Lee Giles1

1The Pennsylvania State University  2Old Dominion University  3Rutgers University

{smr48,cxg286,rdf5090,lpb5347,acs20,amk17,mvm7085,tjf115,nzn5185,amm8987,svm6277,rfn5089,clg20}@psu.edu
{j1wu,xwei001}@odu.edu
david.pennock…

Transparency and reproducibility in artificial intelligence.
Benjamin Haibe-Kains, George Alexandru Adam, Ahmed Hosny, Farnoosh Khodakarami, the Massive Analysis Quality Control (MAQC) Society Board of Directors, Levi Waldron, Bo Wang, Chris McIntosh, …
Predicting the Reproducibility of Social and Behavioral Science Papers Using Supervised Learning Models
A framework is proposed that extracts five types of features from scholarly work to support assessments of the reproducibility of published research claims, and a subset of nine top features is identified that play relatively more important roles in predicting the reproducibility of social and behavioral science (SBS) papers in the authors' corpus.
Probabilistic forecasting of replication studies
The results suggest that many of the estimates from the original studies were inflated, possibly caused by publication bias or questionable research practices, and also that some degree of heterogeneity between original and replication effects should be expected.
Estimating the deep replicability of scientific findings using human and artificial intelligence
An artificial intelligence model is trained to estimate a paper’s replicability using ground truth data on studies that had passed or failed manual replication tests, and its generalizability is tested on an extensive set of out-of-sample studies.
Using prediction markets to estimate the reproducibility of scientific research
It is argued that prediction markets could be used to obtain speedy information about reproducibility at low cost, and could potentially even be used to determine which studies to replicate so as to optimally allocate limited replication resources.
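The mechanism behind such markets can be illustrated with Hanson's logarithmic market scoring rule (LMSR), a standard automated market maker for binary contracts (this is a minimal sketch, not necessarily the mechanism used in any of the studies listed here; the class, liquidity parameter, and trade sizes are illustrative). Traders buy shares in the outcome "the study replicates," and the instantaneous price is read as the crowd's probability estimate.

```python
import math

class LMSRMarket:
    """Minimal LMSR market maker for a binary claim
    (outcome 1 = "replicates", outcome 0 = "fails to replicate")."""

    def __init__(self, b=100.0):
        self.b = b              # liquidity parameter: larger b = slower price moves
        self.q = [0.0, 0.0]     # outstanding shares per outcome

    def cost(self, q):
        # LMSR cost function C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        # Instantaneous price is the softmax of the share vector,
        # interpretable as the market's probability for that outcome.
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        # A trade costs the difference in the cost function.
        new_q = list(self.q)
        new_q[outcome] += shares
        payment = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return payment

market = LMSRMarket(b=100.0)
print(round(market.price(1), 3))   # 0.5: market starts uninformative
market.buy(1, 80.0)                # a trader bets on replication
print(round(market.price(1), 3))   # 0.69: crowd estimate rises
```

Because the price always sums to one across outcomes and moves monotonically with net purchases, the final price can be read directly as an aggregated replication probability.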
The Extent of Price Misalignment in Prediction Markets
This work reveals persistent arbitrage opportunities for risk-neutral investors between identical contracts on different exchanges. It details how to improve prediction markets by moving the burden of finding and fixing logical contradictions into the exchange and by providing flexible trading interfaces, both of which free traders to focus on providing meaningful information in the form they find most natural.
Design and Analysis of a Synthetic Prediction Market using Dynamic Convex Sets
Predicting replicability—Analysis of survey and prediction market data from large-scale forecasting projects
Analysis of data from four studies that sought to forecast the outcomes of replication projects in the social and behavioural sciences, using human experts who participated in prediction markets and answered surveys, finds that there is information within the scientific community about the replicability of scientific findings, and that both surveys and prediction markets can be used to elicit and aggregate this information.
Predicting replication outcomes in the Many Labs 2 study
Surrogate Scoring Rules
It is shown that, with a single bit of information about the prior distribution of the random variables, surrogate scoring rules (SSR) in a multi-task setting recover strictly proper scoring rules (SPSR) in expectation, as if the mechanism had access to the ground truth.
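The debiasing idea behind surrogate scoring can be sketched as follows (a simplified illustration using the Brier score and assumed, known noise rates; the paper's actual mechanism estimates these quantities rather than taking them as given): score each report against a noisy proxy label, with a correction chosen so that the surrogate score equals the true score in expectation over the noise.

```python
import random

def brier(report, outcome):
    # Quadratic (Brier-style) score for a probability report; higher is better.
    return 1.0 - (report - outcome) ** 2

def surrogate_score(report, z, e0, e1):
    """Surrogate score against a noisy label z, where
    e0 = P(z=1 | true y=0) and e1 = P(z=0 | true y=1).
    In expectation over the label noise, this equals brier(report, y)."""
    e = {0: e0, 1: e1}
    return ((1 - e[1 - z]) * brier(report, z)
            - e[z] * brier(report, 1 - z)) / (1 - e0 - e1)

random.seed(0)
y = 1                  # true (unobserved) outcome
report = 0.8           # forecaster's probability that y = 1
e0, e1 = 0.2, 0.3      # assumed, known noise rates of the proxy label
n = 400_000
total = 0.0
for _ in range(n):
    z = 0 if random.random() < e1 else 1   # noisy proxy label for y = 1
    total += surrogate_score(report, z, e0, e1)
print(round(total / n, 2), round(brier(report, y), 2))  # surrogate mean matches true score
```

The single division by (1 - e0 - e1) is the correction: it cancels the bias the label noise injects, which is why only coarse information about the noise (here, the rates themselves) is needed to recover a strictly proper score in expectation.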