Corpus ID: 231740610

Can We Automate Scientific Reviewing?

@article{Yuan2021CanWA,
  title={Can We Automate Scientific Reviewing?},
  author={Weizhe Yuan and Pengfei Liu and Graham Neubig},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.00176}
}
The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications. At the same time, the review of each paper is a laborious process that must be carried out by subject matter experts. Thus, providing high-quality reviews of this growing number of papers is a significant challenge. In this work, we ask the question “can we automate scientific reviewing?”, discussing the possibility of using state-of-the-art natural language…

Citations

Automated scholarly paper review: Possibility and challenges
This review paper proposes the concept of automated scholarly paper review (ASPR), surveys the relevant literature and technologies to discuss the possibility of achieving a full-scale computerized review process, and concludes that corresponding research and technologies already exist at each stage of ASPR.
A Dataset for Discourse Structure in Peer Review Discussions
A new expert-annotated dataset of 20k English sentences contained in 506 review-rebuttal pairs is presented, and it is shown that discourse cues from rebuttals can shed light on the quality and interpretation of reviews.
We Can Explain Your Research in Layman's Terms: Towards Automating Science Journalism at Scale
This work creates a specialized dataset that pairs scientific papers with their Science Daily press releases, and demonstrates numerous sequence-to-sequence (seq2seq) applications using it, with the aim of facilitating further research on language generation.
BARTScore: Evaluating Generated Text as Text Generation
This work conceptualizes the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models, and proposes a metric, BARTScore, with a number of variants that can be flexibly applied in an unsupervised fashion to evaluate text from different perspectives.

References

SHOWING 1-10 OF 65 REFERENCES
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications
The first public dataset of scientific peer reviews available for research purposes (PeerRead v1) is presented, and it is shown that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.
A System for Summarizing Scientific Topics Starting from Keywords
In this paper, we investigate the problem of automatic generation of scientific surveys starting from keywords provided by a user. We present a system that can take a topic query as input and…
Sentiment Analysis of Peer Review Texts for Scholarly Papers
This paper investigates the task of automatically predicting the overall recommendation/decision from a peer review text written for a paper submission, and of further identifying the sentences with positive and negative sentiment polarity; a multiple-instance learning network with a novel abstract-based memory mechanism (MILAM) is proposed to address this challenging task.
ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks
The first large-scale manually annotated corpus for scientific paper summarization is developed and released, enabled by a faster annotation process, and summarization methods that integrate the authors’ original highlights with the article’s actual impacts on the community are proposed to create comprehensive, hybrid summaries.
Aspect-based Sentiment Analysis of Scientific Reviews
An active learning framework is used to build a training dataset for aspect prediction, which is further used to obtain the aspects and sentiments for the entire dataset; the distribution of aspect-based sentiments obtained from a review is shown to be significantly different for accepted and rejected papers.
Paper Gestalt
Peer review of conference paper submissions is an integral part of the research cycle, though it has unknown origins. For the computer vision community, this process has become significantly more…
Does My Rebuttal Matter? Insights from a Major NLP Conference
The results suggest that a reviewer’s final score is largely determined by her initial score and the distance to the other reviewers’ initial scores, which could help better assess the usefulness of the rebuttal phase in NLP conferences.
Automatic Generation of Citation Texts in Scholarly Papers: A Pilot Study
This pilot study confirms the feasibility of automatically generating citation texts in scholarly papers, a technique with great potential to help researchers prepare their scientific papers.
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
A simple extractive step is performed before generating a summary, which is then used to condition the transformer language model on relevant information before it is tasked with generating the summary.
Surveyor: A System for Generating Coherent Survey Articles for Scientific Topics
An extractive summarization algorithm is introduced that combines a content model with a discourse model to generate coherent and readable summaries of scientific topics, using text from scientific articles relevant to the topic.