Z-Curve.2.0: Estimating Replication Rates and Discovery Rates

@inproceedings{Barto2020ZCurve20ER,
  title={Z-Curve.2.0: Estimating Replication Rates and Discovery Rates},
  author={Franti{\v s}ek Barto{\v s} and Ulrich Schimmack},
  year={2020}
}
Publication bias, the fact that published studies are not necessarily representative of all conducted studies, poses a significant threat to the credibility of scientific literature. To mitigate the problem, we introduce z-curve 2.0 as a method that estimates two interpretable measures of the credibility of scientific literature based on the test statistics of published studies: the expected replication rate (ERR) and the expected discovery rate (EDR). Z-curve 2.0 extends the work by Brunner and… 
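As an illustration of the basic ingredients of this kind of analysis, the sketch below converts two-sided p-values of significant results into absolute z-scores and averages a naive "observed power" across them. This is a deliberate simplification: the actual z-curve 2.0 method fits a finite mixture of folded normal distributions to the z-score distribution to correct for selection for significance. The function names here are illustrative, not part of the z-curve software.

```python
from statistics import NormalDist

norm = NormalDist()

def p_to_z(p):
    """Convert a two-sided p-value into an absolute z-score."""
    return norm.inv_cdf(1 - p / 2)

def naive_replication_rate(p_values, alpha=0.05):
    """Crude stand-in for the ERR: average the 'observed power' of the
    significant results, i.e. P(|Z_new| > z_crit) if each observed z-score
    were the true noncentrality. Unlike z-curve 2.0, this does not correct
    for selection for significance, so it overestimates replicability."""
    z_crit = p_to_z(alpha)
    zs = [p_to_z(p) for p in p_values if p < alpha]
    powers = [1 - norm.cdf(z_crit - z) + norm.cdf(-z_crit - z) for z in zs]
    return sum(powers) / len(powers)
```

A usage example: `naive_replication_rate([0.001, 0.01, 0.04])` returns a value between 0 and 1, the fraction of exact replications expected to be significant under the naive assumptions above.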
Responsible product design to mitigate excessive gambling: A scoping review and z-curve analysis of replicability
TLDR
The results of the z-curve analysis provide some evidence of publication bias and suggest that the replicability of the responsible product design literature is uncertain but could be low; greater transparency and precision are paramount to improving the evidence base for responsible product design to mitigate gambling-related harm.
Gambling researchers’ use and views of open science principles and practices: a brief report
ABSTRACT Scientists across disciplines have begun to implement ‘open science’ principles and practices, which are designed to enhance the quality, transparency, and replicability of scientific
Assessing the evidence of perspective taking on stereotyping and negative evaluations: A p-curve analysis
Perspective taking is conceptualized as the ability to consider or adopt the perspective of another individual who is perceived to be in need; it has shown mixed results in stereotype reduction and
Auto-Detecting Perpetual Outliers Using Efficient Modified Fuzzy Clustering Approach
TLDR
A modified robust fuzzy clustering approach is presented that detects unusual outliers, using the membership function to tolerate uncertainty, and is shown to be effective at detecting perpetual outliers.
Are Emotion-Expressing Messages More Shared on Social Media? A Meta-Analytic Review
Given that social media has brought significant change to the communication landscape, researchers have explored factors that can influence audiences’ information-sharing on social media such as a
A Review of the Effects of Valenced Odors on Face Perception and Evaluation
TLDR
The results indicate that odors may influence facial evaluations and classifications in several ways, and that exposure to a valenced odor facilitates the processing of a similarly valenced facial expression.

References

How to Detect Publication Bias in Psychological Research
Abstract. Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates. Both of these problems lead to severely inflated effect size
Estimating Population Mean Power Under Conditions of Heterogeneity and Selection for Significance
In scientific fields that use significance tests, statistical power is important for successful replications of significant results because it is the long-run success rate in a series of exact
Publication decisions revisited: the effect of the outcome of statistical tests on the decision to p
TLDR
Evidence is presented that published results of scientific investigations are not a representative sample of the results of all scientific studies, and that the practices leading to publication bias have not changed over a period of 30 years.
Why Most Published Research Findings Are False
TLDR
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.
Modeling publication selection effects in meta-analysis
Publication selection effects arise in meta-analysis when the effect magnitude estimates are observed in (available from) only a subset of the studies that were actually conducted and the probability
Robust Bayesian Meta-Analysis: Addressing Publication Bias with Model-Averaging
TLDR
It is demonstrated that RoBMA finds evidence for the absence of publication bias in Registered Replication Reports, reliably avoids false positives, and is relatively robust to model misspecification; simulations show that it outperforms existing methods.
P-Curve: A Key to the File Drawer
TLDR
By telling us whether the authors can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file-drawers of failed studies and analyses.
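The core intuition behind p-curve can be sketched in a few lines: under the null of no true effect, significant p-values are uniform on (0, .05), so roughly half should fall below .025, whereas a true effect produces right skew (an excess of very small p-values). The published p-curve method combines several tests; the toy function below, whose name is hypothetical, implements only this binomial-test intuition.

```python
from math import comb

def p_curve_binomial(p_values, alpha=0.05):
    """Toy right-skew test in the spirit of p-curve: among significant
    results, count how many p-values fall below alpha/2 and return the
    one-sided p-value of that count under Binomial(n, 0.5). A small
    return value suggests right skew, i.e. evidential value."""
    sig = [p for p in p_values if p < alpha]
    n = len(sig)
    k = sum(p < alpha / 2 for p in sig)
    # P(X >= k) for X ~ Binomial(n, 0.5)
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
```

For example, with significant p-values [0.001, 0.002, 0.003, 0.04], three of four fall below .025, giving a binomial p-value of 5/16 = 0.3125: suggestive of right skew but far from conclusive with so few studies.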
Estimating Effect Size Under Publication Bias: Small Sample Properties and Robustness of a Random Effects Selection Model
When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We
An exploratory test for an excess of significant findings
TLDR
A test to explore biases stemming from the pursuit of nominal statistical significance was developed and demonstrated a clear or possible excess of significant studies in 6 of 8 large meta-analyses and in the wide domain of neuroleptic treatments.
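The logic of an excess-significance test can be sketched as follows: given the power of each study (which in practice must itself be estimated, e.g. from a meta-analytic effect size), the expected number of significant results is the sum of the powers, and an observed count well above that expectation suggests bias. The function below, whose name is hypothetical, uses a simple normal approximation rather than the exact binomial or chi-square machinery of the published test.

```python
from math import sqrt
from statistics import NormalDist

def excess_significance_z(n_sig, powers):
    """Toy excess-significance test: compare the observed number of
    significant studies (n_sig) with the number expected from assumed
    per-study power values, using a normal approximation to the sum of
    independent Bernoulli trials. Returns a one-sided p-value for an
    excess of significant findings."""
    expected = sum(powers)                       # E[number significant]
    var = sum(p * (1 - p) for p in powers)       # Var of the Bernoulli sum
    z = (n_sig - expected) / sqrt(var)
    return 1 - NormalDist().cdf(z)
```

For instance, 8 significant results out of 10 studies that each had 50% power (5 expected) yields a one-sided p-value of roughly .03, flagging a possible excess of significant findings.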
The ironic effect of significant results on the credibility of multiple-study articles.
TLDR
One major recommendation is to pay more attention to the power of studies to produce positive results without the help of questionable research practices and to request that authors justify sample sizes with a priori predictions of effect sizes.
...