Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language

@inproceedings{He2015QuestionAnswerDS,
  title={Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language},
  author={Luheng He and Mike Lewis and Luke Zettlemoyer},
  booktitle={EMNLP},
  year={2015}
}
This paper introduces the task of question-answer driven semantic role labeling (QA-SRL), where question-answer pairs are used to represent predicate-argument structure. It also allows for scalable data collection by annotators with very little training and no linguistic expertise. We gather data in two domains, newswire text and Wikipedia articles, and introduce simple classifier-based models for predicting which questions to ask and what their answers should be. Our results show that non-expert…
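
As a rough illustration of the representation the abstract describes, the sketch below encodes one predicate's arguments as natural-language question-answer pairs rather than role labels. The sentence, questions, and class names are invented for illustration and are not taken from the paper's dataset.

```python
# Minimal sketch of a QA-SRL-style annotation: for a target verb in a sentence,
# each argument is captured as a wh-question about the predicate plus one or
# more answer spans drawn from the sentence. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class QAPair:
    question: str        # wh-question built around the predicate
    answers: list        # answer span(s) copied from the sentence

@dataclass
class PredicateAnnotation:
    sentence: str
    predicate: str        # the target verb being annotated
    qa_pairs: list = field(default_factory=list)

example = PredicateAnnotation(
    sentence="The company acquired the startup in 2019 to expand its cloud business.",
    predicate="acquired",
    qa_pairs=[
        QAPair("Who acquired something?", ["The company"]),
        QAPair("What did someone acquire?", ["the startup"]),
        QAPair("When did someone acquire something?", ["in 2019"]),
        QAPair("Why did someone acquire something?", ["to expand its cloud business"]),
    ],
)

for qa in example.qa_pairs:
    print(f"{qa.question:45s} -> {qa.answers[0]}")
```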

ASQ: Automatically Generating Question-Answer Pairs using AMRs

TLDR
This work introduces ASQ, a tool to automatically mine questions and answers from a sentence, using its Abstract Meaning Representation (AMR), making the process faster and more cost-effective without compromising the quality and validity of the question-answer pairs thus obtained.

Annotating and Modeling Shallow Semantics Directly from Text

TLDR
This thesis introduces question-answer driven semantic role labeling (QA-SRL), an annotation framework that allows us to gather SRL information from non-expert annotators, and develops two general-purpose, syntax-independent neural models that lead to significant performance gains.

Incidental Supervision from Question-Answering Signals

TLDR
This paper studies the case where the annotations are in the format of question-answering (QA) and proposes an effective way to learn useful representations for other tasks and finds that the representation retrieved from question-answer meaning representation (QAMR) data can almost universally improve on a wide range of tasks.

An MRC Framework for Semantic Role Labeling

TLDR
This paper formalizes predicate disambiguation as multiple-choice machine reading comprehension, where the descriptions of candidate senses of a given predicate are used as options to select the correct sense.

Crowdsourcing Question-Answer Meaning Representations

TLDR
A crowdsourcing scheme is developed to show that QAMRs can be labeled with very little training, and a qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets.

QANom: Question-Answer driven SRL for Nominalizations

We propose a new semantic scheme for capturing predicate-argument relations for nominalizations, termed QANom. This scheme extends the QA-SRL formalism (He et al., 2015), modeling the relations

Asking It All: Generating Contextualized Questions for any Semantic Role

TLDR
The task of role question generation is introduced, which requires producing a set of questions asking about all possible semantic roles of the predicate, and a two-stage model is developed, which first produces a context-independent question prototype for each role and then revises it to be contextually appropriate for the passage.

Knowledge-based Supervision for Domain-adaptive Semantic Role Labeling

TLDR
Linked lexical knowledge bases are used as a basis for automatic training data generation across languages and domains to improve lexicon coverage and training data coverage for SRL.

Semantic Role Labeling with Pretrained Language Models for Known and Unknown Predicates

TLDR
The first full pipeline for semantic role labelling of Russian texts is built, and it is shown that embeddings generated by deep pretrained language models are superior to classical shallow embeddings for argument classification of both “known” and “unknown” predicates.

Inducing Semantic Roles Without Syntax

TLDR
It is shown that it is possible to automatically induce semantic roles from QA-SRL, a scalable and ontology-free semantic annotation scheme that uses question-answer pairs to represent predicate-argument structure, and that this method outperforms all previous models as well as a new state-of-the-art baseline over gold syntax.
...

References


Developing a large semantically annotated corpus

TLDR
It is argued that a bootstrapping approach comprising state-of-the-art NLP tools for parsing and semantic interpretation, in combination with a wiki-like interface for collaborative annotation by experts and a game with a purpose for crowdsourcing, provides the starting ingredients for fulfilling this enterprise.

The Proposition Bank: An Annotated Corpus of Semantic Roles

TLDR
An automatic system for semantic role tagging trained on the corpus is described and the effect on its performance of various types of information is discussed, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty trace categories of the treebank.

Dependency-based Semantic Role Labeling of PropBank

TLDR
This work presents a PropBank semantic role labeling system for English that is integrated with a dependency parser and is the first dependency-based semantic role labeler for PropBank that rivals constituent-based systems in terms of performance.

Semantic Role Labeling

TLDR
This chapter presents the application of the ETL approach to semantic role labeling (SRL) and evaluates the performance of ETL over two English language corpora: CoNLL-2004 and CoNLL-2005.

Recognizing Implied Predicate-Argument Relationships in Textual Inference

TLDR
This work investigates implied predicate-argument relationships which are not explicitly expressed in syntactic structure in the context of textual inference scenarios, and provides a large and freely available evaluation dataset for the task setting and proposes methods to cope with it.

A Bayesian Approach to Unsupervised Semantic Role Induction

TLDR
Two Bayesian models for the unsupervised semantic role labeling (SRL) task are introduced, with the coupled model consistently outperforming the factored counterpart in all experimental set-ups.

Calibrating Features for Semantic Role Labeling

This paper takes a critical look at the features used in the semantic role tagging literature and shows that the information in the input, generally a syntactic parse tree, has yet to be fully

From TreeBank to PropBank

TLDR
This paper describes the approach to the development of a Proposition Bank, which involves the addition of semantic information to the Penn English Treebank, and introduces metaframes as a technique for handling similar frames among near-synonymous verbs.

Joint A* CCG Parsing and Semantic Role Labelling

TLDR
A joint model using CCG is introduced, which is motivated by the close link between CCG syntax and semantics; it is the first to substantially improve both syntactic and semantic accuracy over a comparable pipeline, and also achieves state-of-the-art results for a non-ensemble semantic role labelling model.

The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages

TLDR
This shared task combines the shared tasks of the previous five years under a unique dependency-based formalism similar to the 2008 task, describes how the data sets were created, and shows their quantitative properties.