Asking It All: Generating Contextualized Questions for any Semantic Role

@article{Pyatkin2021AskingIA,
  title={Asking It All: Generating Contextualized Questions for any Semantic Role},
  author={Valentina Pyatkin and Paul Roit and Julian Michael and Reut Tsarfaty and Yoav Goldberg and Ido Dagan},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.04832}
}
Asking questions about a situation is an inherent step towards understanding it. To this end, we introduce the task of role question generation, which, given a predicate mention and a passage, requires producing a set of questions asking about all possible semantic roles of the predicate. We develop a two-stage model for this task, which first produces a context-independent question prototype for each role and then revises it to be contextually appropriate for the passage. Unlike most existing… 
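
To make the two-stage design concrete, the following is a minimal sketch of such a pipeline in Python, assuming two seq2seq models (e.g., fine-tuned BART checkpoints loaded via Hugging Face transformers). The checkpoint names, prompt formats, and helper functions are illustrative assumptions, not the authors' released code, and the base checkpoints would need task-specific fine-tuning before the outputs were meaningful.

    # Illustrative sketch of the two-stage role question generation pipeline.
    # Checkpoints and prompt formats are placeholders, not the authors' code;
    # without task-specific fine-tuning the generated text will not be useful.
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    prototype_model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
    contextualizer = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    def generate(model, text):
        # Encode the input and decode a single beam-searched output string.
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        output_ids = model.generate(**inputs, max_length=32, num_beams=4)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    def role_questions(predicate, passage, roles):
        questions = {}
        for role in roles:
            # Stage 1: context-independent question prototype for the role.
            prototype = generate(prototype_model, f"predicate: {predicate} role: {role}")
            # Stage 2: revise the prototype to fit the passage.
            questions[role] = generate(contextualizer, f"question: {prototype} context: {passage}")
        return questions

    print(role_questions("sold", "The company sold its assets last year.",
                         ["ARG0", "ARG1", "ARGM-TMP"]))

Decoupling the role-driven prototype (stage 1) from contextual revision (stage 2) is what allows a question to be produced for every semantic role of the predicate, rather than only for arguments already identified in the text.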

Citations

Competence-based Question Generation

This work defines competence-based (CB) question generation and focuses on queries over lexical semantic knowledge involving the implicit argument and subevent structure of verbs.

Dense Paraphrasing for Textual Enrichment

This paper builds the first complete DP dataset, provides the scope and design of the annotation task, and presents results demonstrating how this DP process can enrich a source text to improve inference and Question Answering (QA) task performance.

“What makes a question inquisitive?” A Study on Type-Controlled Inquisitive Question Generation

This work annotates an inquisitive question dataset with question types, trains question type classifiers, and fine-tunes models for type-controlled question generation, finding that the ranker chooses questions with the best syntax, semantics, and inquisitiveness, even rivaling the performance of human-written questions.

Question Generation and Answering for exploring Digital Humanities collections

This paper proposes a new approach for question generation, relying on a BART Transformer-based generative model whose input data are enriched by semantic constraints, and validates it on a new corpus: the digitized archive collection of a French social science journal.

SemEval-2022 Task 9: R2VQ – Competence-based Multimodal Question Answering

A competence-based question answering challenge designed to involve rich semantic annotation and aligned text-video objects, in which systems answer questions over a collection of cooking recipes and videos, each question reflecting a specific reasoning competence.

Conditional Generation with a Question-Answering Blueprint

This work proposes a new conceptualization of text plans as a sequence of question-answer (QA) pairs, enhancing existing datasets with a QA blueprint operating as a proxy for both content selection and planning.

Ask to Understand: Question Generation for Multi-hop Question Answering

This paper carefully designs an end-to-end QG module on the basis of a classical QA module, which helps the model understand the context by asking inherently logical sub-questions, thus inheriting interpretability from the QD-based method and showing superior performance.

Generating Literal and Implied Subquestions to Fact-check Complex Claims

This work focuses on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers help identify relevant evidence and derive the veracity of the full claim, suggesting that such subquestions can be useful pieces of a fact-checking pipeline.

Generative Language Models for Paragraph-Level Question Generation

QG-Bench is introduced, a multilingual and multi-domain benchmark for QG built by converting existing question answering datasets to a standard QG setting, and robust QG baselines based on generative language models are proposed.

Selecting Better Samples from Pre-trained LLMs: A Case Study on Question Generation

This case study, framed in the context of question generation, proposes two prompt-based approaches to selecting high-quality questions from a set of LLM-generated candidates and empirically demonstrates that the proposed approaches can effectively select questions of higher quality than greedy generation.

References

Showing 1-10 of 32 references

Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language

The results show that non-expert annotators can produce high-quality QA-SRL data and establish baseline performance levels for future work on this task; simple classifier-based models are also introduced for predicting which questions to ask and what their answers should be.
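
For concreteness, a QA-SRL annotation for a single predicate can be pictured as the following Python structure; the sentence and role questions here are invented for illustration and are not drawn from the dataset itself.

    # Hypothetical QA-SRL style annotation for one predicate (illustrative only):
    # each semantic role is represented by a natural-language question whose
    # answer is a span of the sentence, with no predefined role ontology.
    qasrl_annotation = {
        "sentence": "The company sold its assets to investors last year.",
        "predicate": "sold",
        "qa_pairs": [
            {"question": "Who sold something?", "answer": "The company"},
            {"question": "What did someone sell?", "answer": "its assets"},
            {"question": "Who did someone sell something to?", "answer": "investors"},
            {"question": "When did someone sell something?", "answer": "last year"},
        ],
    }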

Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation

Question Generation (QG) is fundamentally a simple syntactic transformation; however, many aspects of semantics influence what questions are good to form. We implement this observation by developing Syn-QG, a set of syntactic and shallow semantic rules for question generation.

Inducing Semantic Roles Without Syntax

It is shown that semantic roles can be automatically induced from QA-SRL, a scalable and ontology-free semantic annotation scheme that uses question-answer pairs to represent predicate-argument structure; this method outperforms all previous models as well as a new state-of-the-art baseline over gold syntax.

QANom: Question-Answer driven SRL for Nominalizations

We propose a new semantic scheme for capturing predicate-argument relations for nominalizations, termed QANom. This scheme extends the QA-SRL formalism (He et al., 2015), modeling the relations between nominal predicates and their arguments as natural-language question-answer pairs.

Asking Clarifying Questions in Open-Domain Information-Seeking Conversations

This paper formulates the task of asking clarifying questions in open-domain information-seeking conversational systems, proposes an offline evaluation methodology for the task, and collects a dataset, called Qulac, through crowdsourcing; the proposed approach significantly outperforms competitive baselines.

SemEval-2010 Task 10: Linking Events and Their Participants in Discourse

In the shared task, this work looked at one particular aspect of cross-sentence links between argument structures, namely linking locally uninstantiated roles to their co-referents in the wider discourse context (if such co-referents exist).

The Proposition Bank: An Annotated Corpus of Semantic Roles

An automatic system for semantic role tagging trained on the corpus is described, and the effect of various types of information on its performance is discussed, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty trace categories of the treebank.

Predicate-specific Annotations for Implicit Role Binding: Corpus Annotation, Data Analysis and Evaluation Experiments

A corpus of predicate-specific annotations for verbs in the FrameNet paradigm, aligned with PropBank and VerbNet, is presented, and a qualitative data analysis leads to observations regarding implicit role realization that can guide further annotation efforts.

PropBank: Semantics of New Predicate Types

This research focuses on expanding PropBank, a corpus annotated with predicate-argument structures, with new predicate types, namely noun, adjective, and complex predicates such as Light Verb Constructions, so that PropBank can reach the same level of coverage and continue to serve as the bedrock for Abstract Meaning Representation.

QADiscourse - Discourse Relations as QA Pairs: Representation, Crowdsourcing and Baselines

A novel representation of discourse relations as QA pairs is proposed, which in turn allows us to crowd-source wide-coverage data annotated with discourse relations, via an intuitively appealing interface for composing such questions and answers.