On Strategyproof Conference Peer Review
@inproceedings{Xu2018OnSC, title={On Strategyproof Conference Peer Review}, author={Yichong Xu and H. Zhao and Xiaofei Shi and Nihar B. Shah}, booktitle={International Joint Conference on Artificial Intelligence}, year={2018} }
We consider peer review in a conference setting where there are conflicts of interest between reviewers and submissions. Under such conflicts, reviewers can strategically manipulate their reviews to influence the final rankings of their own papers. Present-day peer-review systems are not designed to guard against such strategic behavior, beyond minimal (and insufficient) checks such as not assigning a paper to a conflicted reviewer. In this work, we address this problem through the lens…
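As a concrete illustration of the kind of mechanism this line of work studies, the sketch below implements a simple partition-based assignment: reviewers are split into two groups, each paper is reviewed only by the group its conflicted reviewer does not belong to, and acceptances are decided separately per group, so no reviewer's score can ever affect their own paper. This is a minimal sketch under simplifying assumptions (one conflicted reviewer per paper, a fixed per-group acceptance quota, averaged scores); the function names and data are hypothetical, and this is not the paper's exact mechanism.

```python
# Hypothetical sketch of a partition-based strategyproof assignment.
# Not the authors' exact mechanism; names and data are illustrative.
import random

def partition_mechanism(papers, author_of, scores, k_per_group, seed=0):
    """papers: list of paper ids; author_of: paper id -> its (conflicted) reviewer;
    scores: (reviewer, paper) -> review score from non-conflicted reviewers;
    returns the list of accepted papers."""
    rng = random.Random(seed)
    reviewers = sorted({author_of[p] for p in papers})
    rng.shuffle(reviewers)
    half = len(reviewers) // 2
    group = {r: (0 if i < half else 1) for i, r in enumerate(reviewers)}

    accepted = []
    for g in (0, 1):
        own_papers = [p for p in papers if group[author_of[p]] == g]
        cross_reviewers = [r for r in reviewers if group[r] != g]

        def avg_cross_score(p):
            # Only cross-group reviews count, so a reviewer can never move
            # the ranking of a paper in their own group (including their own).
            vals = [scores[(r, p)] for r in cross_reviewers if (r, p) in scores]
            return sum(vals) / len(vals) if vals else float("-inf")

        ranked = sorted(own_papers, key=avg_cross_score, reverse=True)
        accepted.extend(ranked[:k_per_group])
    return accepted

if __name__ == "__main__":
    # Hypothetical toy data: four single-author papers, each author also reviews.
    papers = ["p1", "p2", "p3", "p4"]
    author_of = {"p1": "r1", "p2": "r2", "p3": "r3", "p4": "r4"}
    rng = random.Random(1)
    scores = {(r, p): round(rng.uniform(0, 10), 1)
              for p in papers for r in author_of.values() if r != author_of[p]}
    print(partition_mechanism(papers, author_of, scores, k_per_group=1))
```

The strategyproofness here comes purely from the structure of the assignment: a reviewer's reviews only ever influence the other group's acceptances, and each group's quota is fixed in advance.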
34 Citations
The Price of Strategyproofing Peer Assessment
- Economics, ArXiv
- 2022
Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment, such as peer grading of assignments, grant proposal review, conference…
Strategyproofing Peer Assessment via Partitioning: The Price in Terms of Evaluators' Expertise
- Computer Science, HCOMP
- 2022
This paper analyzes the price of strategyproofness, that is, the amount of compromise on the assigned evaluators' expertise required to obtain strategyproofness, and establishes several polynomial-time algorithms for strategyproof assignment along with assignment-quality guarantees.
Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments
- Computer Science, NeurIPS
- 2020
A (randomized) algorithm for reviewer assignment is presented that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair.
PeerNomination: A novel peer selection algorithm to handle strategic and noisy assessments
- Computer Science, Artif. Intell.
- 2023
Peer Selection with Noisy Assessments
- Computer Science, ArXiv
- 2021
This paper extends PeerNomination, the most accurate peer reviewing algorithm to date, into WeightedPeerNomination, which is able to handle noisy and inaccurate agents, and explicitly formulates assessors' reliability weights in a way that does not violate strategyproofness.
Group Fairness in Peer Review
- Computer Science
- 2022
A simple peer review model is studied; it is proved that the model always admits a reviewing assignment in the core, and an efficient algorithm is designed to find one such assignment. The algorithm, in addition to satisfying the core, is observed to generate good social welfare on average.
A Dataset on Malicious Paper Bidding in Peer Review
- Computer Science, ArXiv
- 2022
A descriptive analysis of the bidding behavior is provided, including a categorization of the different strategies employed by participants, and the performance of some simple algorithms meant to detect malicious bidding is evaluated.
Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design
- Computer Science, AAMAS
- 2022
It is proved that when the set of papers requiring additional review is unknown, a simplified variant of this problem is NP-hard, and it is empirically shown that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.
An automated conflict of interest based greedy approach for conference paper assignment system
- Computer Science, J. Informetrics
- 2020
No Agreement Without Loss: Learning and Social Choice in Peer Review
- Economics, ArXiv
- 2022
In peer review systems, reviewers are often asked to evaluate various features of submissions, such as technical quality or novelty. A score is given to each of the predefined features and based on…
References
Showing 1–10 of 64 references
Peer-review in a world with rational scientists: Toward selection of the average
- Economics
- 2010
It is found that a small fraction of incorrect (selfish or rational) referees is sufficient to drastically lower the quality of the published (accepted) scientific standard.
Strategyproof peer selection using randomization, partitioning, and apportionment
- Computer Science, Artif. Intell.
- 2019
PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review
- Computer Science, ALT
- 2019
The fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers; an assignment algorithm based on an incremental max-flow procedure is designed and shown to be near-optimally fair.
A SUPER* Algorithm to Optimize Paper Bidding in Peer Review
- Computer Science, UAI
- 2020
An algorithm called SUPER*, inspired by the A* algorithm, is presented; it considerably outperforms baselines deployed in existing systems, consistently reducing the number of papers with fewer than the requisite number of bids by 50–75% or more, and is also robust to various real-world complexities.
Incentive Design in Peer Review: Rating and Repeated Endogenous Matching
- Economics, IEEE Transactions on Network Science and Engineering
- 2019
The proposed matching rules are easy to implement, require no knowledge of agents' private information, and are effective in guiding the system to an equilibrium in which agents are incentivized to exert high effort and receive ratings that precisely reflect their review quality.
On Testing for Biases in Peer Review
- Mathematics, NeurIPS
- 2019
A general framework for testing for biases in (single- versus double-blind) peer review is presented, along with a hypothesis test that guarantees control over the false alarm probability while achieving non-trivial power.
Sum of us: strategyproof selection from the selectors
- Computer Science, TARK XIII
- 2011
A randomized strategyproof mechanism is presented that provides an approximation ratio that is bounded from above by four for any value of k, and approaches one as k grows.
Impartial Peer Review
- Psychology, IJCAI
- 2015
This work designs an impartial mechanism that selects a k-subset of proposals that is nearly as highly rated as the one selected by the non-impartial (abstract version of the) NSF pilot mechanism, even when the latter mechanism has the "unfair" advantage of eliciting honest reviews.
How to Calibrate the Scores of Biased Reviewers by Quadratic Programming
- Psychology, AAAI
- 2011
A method based on a maximum likelihood estimator is proposed for calibrating the scores of reviewers who are potentially biased and who each see only partial information; the estimator yields a quadratic program whose solution transforms the individual review scores into calibrated, globally comparable scores.
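As a rough illustration of this style of calibration, the sketch below fits a simple additive model (observed score = paper quality + reviewer bias + noise) by least squares, which is one particular quadratic program; the model, the toy data, and the zero-mean-bias constraint are illustrative assumptions rather than the exact method proposed in the cited paper.

```python
# Hypothetical sketch: calibrate review scores with an additive reviewer-bias model.
# Not the cited paper's exact formulation; data and constraint are illustrative.
import numpy as np

def calibrate(observations, n_reviewers, n_papers):
    """observations: list of (reviewer, paper, score) triples.
    Fits score ~ quality[paper] + bias[reviewer] by least squares (a QP)
    and returns calibrated paper qualities and reviewer biases."""
    rows, y = [], []
    for r, p, s in observations:
        x = np.zeros(n_papers + n_reviewers)
        x[p] = 1.0                 # paper-quality coefficient
        x[n_papers + r] = 1.0      # reviewer-bias coefficient
        rows.append(x)
        y.append(s)
    A = np.vstack(rows)
    y = np.array(y)
    # Pin the reviewer biases to sum to zero to remove the additive ambiguity.
    constraint = np.zeros(n_papers + n_reviewers)
    constraint[n_papers:] = 1.0
    A = np.vstack([A, constraint])
    y = np.append(y, 0.0)
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta[:n_papers], theta[n_papers:]

if __name__ == "__main__":
    # Hypothetical data: reviewer 0 is lenient (+1), reviewer 1 is harsh (-1).
    obs = [(0, 0, 7.0), (0, 1, 5.0), (0, 2, 8.0),
           (1, 0, 5.0), (1, 1, 3.0), (1, 2, 6.0)]
    quality, bias = calibrate(obs, n_reviewers=2, n_papers=3)
    print("calibrated qualities:", np.round(quality, 2))
    print("reviewer biases:", np.round(bias, 2))
```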
Choosing How to Choose Papers
- Computer Science, ArXiv
- 2018
This paper presents a framework based on L(p,q)-norm empirical risk minimization for learning the community's aggregate mapping, and characterizes p = q = 1 as the only choice that satisfies three natural axiomatic properties.