Corpus ID: 227239273

A Large Scale Randomized Controlled Trial on Herding in Peer-Review Discussions

@article{Stelmakh2020ALS,
  title={A Large Scale Randomized Controlled Trial on Herding in Peer-Review Discussions},
  author={Ivan Stelmakh and Charvi Rastogi and Nihar B. Shah and Aarti Singh and Hal Daum{\'e}},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.15083}
}
Peer review is the backbone of academia and humans constitute a cornerstone of this process, being responsible for reviewing papers and making the final acceptance/rejection decisions. Given that human decision making is known to be susceptible to various cognitive biases, it is important to understand which (if any) biases are present in the peer-review process and design the pipeline such that the impact of these biases is minimized. In this work, we focus on the dynamics of between-reviewers… 
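The abstract describes a randomized controlled trial on herding in reviewer discussions. As a hedged illustration only (not the paper's actual protocol), a herding effect could be probed by randomly assigning which reviewer opens each discussion and comparing how often the final decision agrees with the opener's stance across arms; the function and parameter names below are hypothetical:

```python
import random

def simulate_rct(n_papers=1000, herding_strength=0.0, seed=0):
    """Toy RCT: each paper's discussion opener is randomly assigned to a
    'treatment' or 'control' arm; we record whether the final decision
    agreed with the opener's stance. Any herding effect shifts the
    agreement probability in the treatment arm."""
    rng = random.Random(seed)
    agree = {"treatment": 0, "control": 0}
    counts = {"treatment": 0, "control": 0}
    for _ in range(n_papers):
        arm = rng.choice(["treatment", "control"])
        counts[arm] += 1
        # Baseline 50% agreement with the opener, plus any herding shift.
        p = 0.5 + (herding_strength if arm == "treatment" else 0.0)
        agree[arm] += rng.random() < p
    return {a: agree[a] / counts[a] for a in counts}

# With no herding, both arms' agreement rates sit near the 0.5 baseline.
rates = simulate_rct(herding_strength=0.0)
```

Comparing the two arms' rates (e.g., with a two-proportion test) would then estimate the causal effect of the opener's stance, which is the logic a randomized design of this kind exploits.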

Figures and Tables from this paper

Citations

Cite-seeing and Reviewing: A Study on Citation Bias in Peer Review
Citations play an important role in researchers’ careers as a key factor in the evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers to try
Towards Fair, Equitable, and Efficient Peer Review
TLDR
This work designs a human-AI collaboration pipeline for peer review to mitigate these issues and ensure that science progresses in a fair, equitable, and efficient manner, along with practical algorithms that help conference organizers promote fairness.
Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design
TLDR
It is proved that when the set of papers requiring additional review is unknown, a simplified variant of this problem is NP-hard, and it is empirically shown that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.
To ArXiv or not to ArXiv: A Study Quantifying Pros and Cons of Posting Preprints Online
Double-blind conferences have engaged in debates over whether to allow authors to post their papers online on arXiv or elsewhere during the review process. Independently, some authors of research
Yes-Yes-Yes: Donation-based Peer Reviewing Data Collection for ACL Rolling Review and Beyond
Peer review is the primary gatekeeper of scientific merit and quality, yet it is prone to bias and suffers from low efficiency. This demands cross-disciplinary scrutiny of the processes that underlie
Yes-Yes-Yes: Proactive Data Collection for ACL Rolling Review and Beyond
The shift towards publicly available text sources has enabled language processing at unprecedented scale, yet leaves underserved the domains where public and openly licensed data is scarce.
KDD 2021 Tutorial on Systemic Challenges and Solutions on Bias and Unfairness in Peer Review
TLDR
In this tutorial, a number of key challenges in peer review are discussed, several directions of research on this topic are outlined, and important open problems that are likely to be exciting to the community are highlighted.
Loss Functions, Axioms, and Peer Review
TLDR
This paper presents a framework inspired by empirical risk minimization (ERM) for learning the community's aggregate mapping, and describes p=q=1 as the only choice of these hyperparameters that satisfies three natural axiomatic properties.
WSDM 2021 Tutorial on Systematic Challenges and Computational Solutions on Bias and Unfairness in Peer Review
TLDR
This tutorial will discuss a number of systemic challenges in peer review such as biases, subjectivity, miscalibration, dishonest behavior, and noise and present computational techniques designed to address these challenges.

References

SHOWING 1-10 OF 42 REFERENCES
On Testing for Biases in Peer Review
TLDR
A general framework for testing for biases in (single vs. double blind) peer review is presented, along with a hypothesis test that has guaranteed control over the false alarm probability and non-trivial power.
Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment
TLDR
This paper designs a principled test for detecting strategic behaviour, conducts an experiment that elicits strategic behaviour from subjects, releases a dataset of patterns of strategic behaviour that may be of independent interest, and proves that the test has strong false alarm guarantees.
Uncovering Latent Biases in Text: Method and Application to Peer Review
TLDR
A novel framework is proposed to quantify bias in text caused by the visibility of subgroup membership indicators; using biases found in the review ratings as "ground truth", it is shown that the framework accurately detects these biases from the review text without having access to the review ratings.
Reviewer bias in single- versus double-blind peer review
TLDR
This study considers full-length submissions to the highly selective 2017 Web Search and Data Mining conference and shows that single-blind reviewing confers a significant advantage to papers with famous authors and authors from high-prestige institutions.
PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review
TLDR
A fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers, and an assignment algorithm based on an incremental max-flow procedure is designed that is near-optimally fair.
How social influence can undermine the wisdom of crowd effect
TLDR
This work demonstrates by experimental evidence that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks.
A SUPER* Algorithm to Optimize Paper Bidding in Peer Review
TLDR
An algorithm called SUPER*, inspired by the A* algorithm, is presented, which considerably outperforms baselines deployed in existing systems, consistently reducing the number of papers with fewer than the requisite number of bids by 50-75% or more, and is also robust to various real-world complexities.
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications
TLDR
The first public dataset of scientific peer reviews available for research purposes (PeerRead v1) is presented and it is shown that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.
A Market-Inspired Bidding Scheme for Peer Review Paper Assignment
TLDR
It is shown that by assigning ‘budgets’ to reviewers and a ‘price’ for every paper that is (roughly) proportional to its demand, the best response of a reviewer is to bid sincerely, and match the budget even when it is not enforced.
Representativeness revisited: Attribute substitution in intuitive judgment.
The program of research now known as the heuristics and biases approach began with a survey of 84 participants at the 1969 meetings of the Mathematical Psychology Society and the American Psychological Association.