On Strategyproof Conference Peer Review

Yichong Xu, H. Zhao, Xiaofei Shi and Nihar B. Shah. International Joint Conference on Artificial Intelligence.
We consider peer review in a conference setting where there are conflicts of interest between reviewers and submissions. Under such conflicts, reviewers can manipulate their reviews in a strategic manner to influence the final rankings of their own papers. Present-day peer-review systems are not designed to guard against such strategic behavior, beyond minimal (and insufficient) checks such as not assigning a paper to a conflicted reviewer. In this work, we address this problem through the lens…


The Price of Strategyproofing Peer Assessment

Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment, such as peer grading of assignments, grant proposal review, conference…

Strategyproofing Peer Assessment via Partitioning: The Price in Terms of Evaluators' Expertise

This paper analyzes the price of strategyproofness, that is, the amount of compromise on the assigned evaluators' expertise required to achieve strategyproofness, and establishes several polynomial-time algorithms for strategyproof assignment along with assignment-quality guarantees.
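The partitioning idea behind such schemes can be sketched in a few lines. The similarity matrix, the single-author-per-paper model, and the fixed two-group split below are simplifying assumptions for illustration, not the paper's actual construction:

```python
import numpy as np

def partition_assign(similarity, author_of, group):
    """Strategyproof-by-partition assignment (illustrative sketch).

    similarity[r, p]: how well reviewer r matches paper p.
    author_of[p]: index of the (single) authoring reviewer of paper p.
    group[r]: which of the two partitions reviewer r belongs to.

    A paper is only reviewed by the partition NOT containing its author,
    so no reviewer's report can influence their own paper's outcome.
    Within that constraint, each paper gets its best-matching reviewer;
    the expertise lost versus an unconstrained assignment is the "price".
    """
    n_rev, n_pap = similarity.shape
    assignment = {}
    for p in range(n_pap):
        eligible = [r for r in range(n_rev) if group[r] != group[author_of[p]]]
        assignment[p] = max(eligible, key=lambda r: similarity[r, p])
    return assignment
```

For example, with reviewers {0, 1} in one group and {2, 3} in the other, papers authored by the first group are assigned only to reviewers from the second, and vice versa.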

Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments

A (randomized) algorithm for reviewer assignment is presented that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair.
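A minimal sketch of the linear-programming core of such a randomized approach follows. The load parameters and the cap `q` are hypothetical, and sampling an actual assignment from the resulting marginal probabilities is a separate step this sketch omits:

```python
import numpy as np
from scipy.optimize import linprog

def capped_assignment_lp(similarity, q, papers_per_reviewer=2,
                         reviewers_per_paper=1):
    """LP for randomized reviewer assignment (illustrative sketch).

    Finds marginal probabilities x[r, p] in [0, q] maximizing total
    similarity, with each paper receiving `reviewers_per_paper` reviews
    in expectation and each reviewer at most `papers_per_reviewer`.
    Capping every marginal at q < 1 limits how reliably any single
    (possibly colluding) reviewer-paper pair can be realized.
    """
    n_rev, n_pap = similarity.shape
    c = -similarity.ravel()  # maximize similarity == minimize its negative
    # equality: each paper's assignment probabilities sum to its demand
    A_eq = np.zeros((n_pap, n_rev * n_pap))
    for p in range(n_pap):
        A_eq[p, p::n_pap] = 1.0  # x is flattened row-major as x[r, p]
    b_eq = np.full(n_pap, float(reviewers_per_paper))
    # inequality: each reviewer's expected load stays within bounds
    A_ub = np.zeros((n_rev, n_rev * n_pap))
    for r in range(n_rev):
        A_ub[r, r * n_pap:(r + 1) * n_pap] = 1.0
    b_ub = np.full(n_rev, float(papers_per_reviewer))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, q), method="highs")
    return res.x.reshape(n_rev, n_pap)
```

With the cap at q = 0.5, even the best-matched reviewer for a paper is assigned to it with probability at most one half, forcing the remaining probability mass onto other reviewers.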

Peer Selection with Noisy Assessments

This paper extends PeerNomination, the most accurate peer-reviewing algorithm to date, into WeightedPeerNomination, which is able to handle noisy and inaccurate agents, and explicitly formulates assessors' reliability weights in a way that does not violate strategyproofness.

Group Fairness in Peer Review

A simple peer-review model is studied; it is proved that the model always admits a reviewing assignment in the core, and an efficient algorithm is designed to find such an assignment. The algorithm, in addition to satisfying the core, is observed to generate good social welfare on average.

A Dataset on Malicious Paper Bidding in Peer Review

A descriptive analysis of the bidding behavior is presented, including a categorization of the different strategies employed by participants, and the performance of some simple algorithms for detecting malicious bidding is evaluated.

Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design

It is proved that a simplified variant of this problem is NP-hard when the set of papers requiring additional review is unknown, and it is shown empirically, across several datasets from real conferences, that dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.

No Agreement Without Loss: Learning and Social Choice in Peer Review

In peer review systems, reviewers are often asked to evaluate various features of submissions, such as technical quality or novelty. A score is given to each of the predefined features, and based on…



Peer-review in a world with rational scientists: Toward selection of the average

It is found that a small fraction of incorrect (selfish or rational) referees is sufficient to drastically lower the quality of the published (accepted) scientific standard.

PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review

A fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers, and an assignment algorithm based on an incremental max-flow procedure is designed that is near-optimally fair.

A SUPER* Algorithm to Optimize Paper Bidding in Peer Review

An algorithm called SUPER*, inspired by the A* algorithm, is presented, which considerably outperforms baselines deployed in existing systems, consistently reducing the number of papers with fewer than requisite bids by 50-75% or more, and is also robust to various real world complexities.

Incentive Design in Peer Review: Rating and Repeated Endogenous Matching

The proposed matching rules are easy to implement and require no knowledge about agents’ private information and are effective in guiding the system to an equilibrium where the agents are incentivized to exert high effort and receive ratings that precisely reflect their review quality.

On Testing for Biases in Peer Review

A general framework for testing for biases in (single- versus double-blind) peer review is developed, along with a hypothesis test that guarantees control over the false alarm probability and has non-trivial power.

Sum of us: strategyproof selection from the selectors

A randomized strategyproof mechanism is presented that provides an approximation ratio that is bounded from above by four for any value of k, and approaches one as k grows.

Impartial Peer Review

This work designs an impartial mechanism that selects a k-subset of proposals nearly as highly rated as the one selected by a non-impartial (abstract version of the) NSF pilot mechanism, even when the latter mechanism has the "unfair" advantage of eliciting honest reviews.

How to Calibrate the Scores of Biased Reviewers by Quadratic Programming

A method is proposed for calibrating the scores of reviewers who are potentially biased and have only partial information, using a maximum likelihood estimator; this yields a quadratic program whose solution transforms the individual review scores into calibrated, globally comparable scores.
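The flavor of such a calibration can be sketched with a least-squares variant. This is not the paper's exact quadratic program: the additive reviewer-bias model and the soft gauge constraint below are simplifying assumptions, under which the Gaussian maximum likelihood estimate reduces to a least-squares (quadratic) problem:

```python
import numpy as np

def calibrate(scores):
    """Least-squares score calibration (illustrative sketch).

    scores[i, j]: reviewer i's score for paper j, NaN if unreviewed.
    Assumes score = quality[j] + bias[i] + Gaussian noise; the MLE is
    then a least-squares problem. Reviewer biases are softly constrained
    to sum to zero, fixing the shift ambiguity between the two terms.
    """
    n_rev, n_pap = scores.shape
    rows, obs = [], []
    for i in range(n_rev):
        for j in range(n_pap):
            if not np.isnan(scores[i, j]):
                row = np.zeros(n_pap + n_rev)
                row[j] = 1.0          # paper-quality coefficient
                row[n_pap + i] = 1.0  # reviewer-bias coefficient
                rows.append(row)
                obs.append(scores[i, j])
    gauge = np.zeros(n_pap + n_rev)
    gauge[n_pap:] = 1.0  # sum of biases, heavily weighted toward zero
    A = np.vstack(rows + [1e3 * gauge])
    y = np.array(obs + [0.0])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta[:n_pap], theta[n_pap:]  # calibrated qualities, biases
```

On noise-free data generated from the additive model, the sketch recovers the true paper qualities and reviewer biases exactly; with noise, it returns the least-squares fit.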

Choosing How to Choose Papers

This paper presents a framework based on L(p,q)-norm empirical risk minimization for learning the community's aggregate mapping, and characterizes p = q = 1 as the only choice that satisfies three natural axiomatic properties.