On Strategyproof Conference Peer Review

Yichong Xu, H. Zhao, Xiaofei Shi, Nihar B. Shah.
In: International Joint Conference on Artificial Intelligence (IJCAI).
We consider peer review in a conference setting where there are conflicts between the reviewers and the submissions. Under such conflicts, reviewers can strategically manipulate their reviews to influence the final rankings of their own papers. Present-day peer-review systems are not designed to guard against such strategic behavior, beyond minimal (and insufficient) checks such as not assigning a paper to a conflicted reviewer. In this work, we address this problem through the lens…


The Price of Strategyproofing Peer Assessment

Strategic behavior is a fundamental problem in a variety of real-world applications that require some form of peer assessment, such as peer grading of assignments, grant proposal review, conference…

Strategyproofing Peer Assessment via Partitioning: The Price in Terms of Evaluators' Expertise

This paper analyzes the price of strategyproofness: the amount of compromise on the assigned evaluators' expertise required to achieve strategyproofness. It establishes several polynomial-time algorithms for strategyproof assignment, along with assignment-quality guarantees.
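The partition idea common to the two entries above can be sketched in a few lines: reviewers are split into groups, and each submission is scored only by reviewers outside its author's group, so no reviewer's scores can influence the ranking of their own paper. The sketch below is illustrative, not code from either paper; the function name, the two-way split, and the per-group quota are all assumptions.

```python
import random

def partitioned_accept(authors, scores, k, seed=0):
    """authors: {paper: author}; scores: {(reviewer, paper): score}.
    Accept k papers, k//2 from each of two randomly chosen author groups,
    ranking each group's papers only by cross-group reviewer scores."""
    rng = random.Random(seed)
    people = sorted(set(authors.values()))
    rng.shuffle(people)
    group = {a: i % 2 for i, a in enumerate(people)}  # two-way partition
    accepted = []
    for g in (0, 1):
        # papers authored inside group g ...
        pool = [p for p, a in authors.items() if group[a] == g]
        # ... are scored only by reviewers outside group g
        def cross_score(paper):
            vals = [s for (r, pp), s in scores.items()
                    if pp == paper and group.get(r, 1 - g) != g]
            return sum(vals) / len(vals) if vals else float("-inf")
        pool.sort(key=cross_score, reverse=True)
        accepted.extend(pool[:k // 2])
    return accepted
```

Because a reviewer's scores only ever rank papers from the other group, misreporting cannot help the reviewer's own submission; the price paid is that within-group expertise goes unused.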

Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments

A (randomized) algorithm for reviewer assignment is presented that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair.

Combating Collusion Rings is Hard but Possible

It is shown that, in some realistic settings, an assignment without any review cycles of small length always exists, and this result also gives rise to an efficient heuristic for computing (weighted) cycle-free review assignments, which are shown to be of excellent quality in practice.

Group Fairness in Peer Review

A simple peer review model is studied and proved to always admit a reviewing assignment in the core; an efficient algorithm is designed to find one such assignment. In addition to satisfying the core, the algorithm is observed to generate good social welfare on average.

Making Paper Reviewing Robust to Bid Manipulation Attacks

This paper develops a novel approach for paper bidding and assignment that is much more robust against bid manipulation attacks and shows empirically that this approach provides robustness even when dishonest reviewers collude, have full knowledge of the assignment system's internal workings, and have access to the system's inputs.

Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design

It is proved that, when the set of papers requiring additional review is unknown, a simplified variant of this problem is NP-hard. It is also shown empirically, across several datasets of real conference data, that dividing reviewers between phases/conditions uniformly at random allows an assignment nearly as good as the oracle optimal assignment.

No Agreement Without Loss: Learning and Social Choice in Peer Review

In peer review systems, reviewers are often asked to evaluate various features of submissions, such as technical quality or novelty. A score is given to each of the predefined features and based on…

A Market-Inspired Bidding Scheme for Peer Review Paper Assignment

It is shown that by assigning 'budgets' to reviewers and a 'price' to every paper that is (roughly) proportional to its demand, the best response of a reviewer is to bid sincerely, i.e., on her most favorite papers, and to match her budget even when it is not enforced.
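The budget-and-price mechanism described above can be sketched as follows. The exact pricing rule and all names here are illustrative assumptions, not the paper's definitions: prices are set proportional to demand, and a reviewer's best response is modeled as bidding down her preference list until the budget is spent.

```python
def paper_prices(bids, total_budget):
    """Set each paper's price (roughly) proportional to its bid demand."""
    demand = {}
    for paper_set in bids.values():
        for p in paper_set:
            demand[p] = demand.get(p, 0) + 1
    total = sum(demand.values()) or 1
    return {p: total_budget * d / total for p, d in demand.items()}

def sincere_bid(preferences, prices, budget):
    """Modeled best response: bid on the most-preferred papers, in
    preference order, while the remaining budget covers their prices."""
    chosen, spent = [], 0.0
    for p in preferences:  # most favorite first
        cost = prices.get(p, 0.0)
        if spent + cost <= budget:
            chosen.append(p)
            spent += cost
    return chosen
```

The intended effect is that popular papers become expensive, so reviewers spread their bids instead of piling onto the same few submissions.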

Peer-review in a world with rational scientists: Toward selection of the average

It is found that a small fraction of incorrect (selfish or rational) referees is sufficient to drastically lower the quality of the published (accepted) scientific standard.

Strategyproof Peer Selection: Mechanisms, Analyses, and Experiments

This work proposes a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties and is shown to perform better, on average and in the worst case, than other strategyproof mechanisms in the literature.

PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review

The fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers; an assignment algorithm based on an incremental max-flow procedure is designed and shown to be near-optimally fair.

Incentive Design in Peer Review: Rating and Repeated Endogenous Matching

The proposed matching rules are easy to implement and require no knowledge about agents’ private information and are effective in guiding the system to an equilibrium where the agents are incentivized to exert high effort and receive ratings that precisely reflect their review quality.

On Testing for Biases in Peer Review

A general framework for testing for biases in (single- versus double-blind) peer review is presented, along with a hypothesis test that guarantees control over the false-alarm probability while achieving non-trivial power.

Sum of us: strategyproof selection from the selectors

A randomized strategyproof mechanism is presented whose approximation ratio is bounded from above by four for any value of k and approaches one as k grows.

Impartial Peer Review

This work designs an impartial mechanism that selects a k-subset of proposals nearly as highly rated as the one selected by the non-impartial (abstract version of the) NSF pilot mechanism, even when the latter has the "unfair" advantage of eliciting honest reviews.

How to Calibrate the Scores of Biased Reviewers by Quadratic Programming

A method is proposed for calibrating the scores of reviewers who are potentially biased and have only partial information. Using a maximum likelihood estimator yields a quadratic program whose solution transforms the individual review scores into calibrated, globally comparable scores.
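A minimal least-squares variant of this calibration idea can be sketched as follows. Under an assumed additive model where each observed score is paper quality plus reviewer bias with Gaussian noise, the maximum-likelihood estimate is a quadratic program, here solved as an ordinary least-squares system; the model, the sum-to-zero bias constraint, and all names are simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def calibrate(scores):
    """scores: {(reviewer, paper): score}. Fit s_ij ~ q_j + b_i by least
    squares with sum(b) = 0; return ({paper: quality}, {reviewer: bias})."""
    reviewers = sorted({r for r, _ in scores})
    papers = sorted({p for _, p in scores})
    r_idx = {r: i for i, r in enumerate(reviewers)}
    p_idx = {p: j for j, p in enumerate(papers)}
    nr, npap = len(reviewers), len(papers)
    rows, rhs = [], []
    for (r, p), s in scores.items():
        row = np.zeros(npap + nr)
        row[p_idx[p]] = 1.0           # paper-quality term q_j
        row[npap + r_idx[r]] = 1.0    # reviewer-bias term b_i
        rows.append(row)
        rhs.append(s)
    # identifiability: force the reviewer biases to sum to zero
    anchor = np.zeros(npap + nr)
    anchor[npap:] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    q = dict(zip(papers, sol[:npap]))
    b = dict(zip(reviewers, sol[npap:]))
    return q, b
```

Subtracting each reviewer's fitted bias from her raw scores yields the globally comparable calibrated scores.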

Choosing How to Choose Papers

This paper presents a framework based on L(p,q)-norm empirical risk minimization for learning the community's aggregate mapping, and characterizes p = q = 1 as the only choice that satisfies three natural axiomatic properties.