Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation

Kaustubh D. Dhole and Christopher D. Manning. Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Question Generation (QG) is fundamentally a simple syntactic transformation; however, many aspects of semantics influence what questions are good to form. We implement this observation by developing Syn-QG, a set of transparent syntactic rules leveraging universal dependencies, shallow semantic parsing, lexical resources, and custom rules which transform declarative sentences into question-answer pairs. We utilize PropBank argument descriptions and VerbNet state predicates to incorporate… 
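The abstract's core claim, that much of QG is a transparent syntactic transformation, can be illustrated with a single rule. The sketch below (not Syn-QG's actual rule set, which operates over universal dependencies and SRL output) shows subject-auxiliary inversion turning a declarative sentence into a yes/no question:

```python
# Illustrative sketch of one syntactic QG rule: subject-auxiliary
# inversion. Real systems like Syn-QG work over dependency parses;
# this toy version just scans tokens for an auxiliary verb.

AUXILIARIES = {"is", "are", "was", "were", "can", "will", "should"}

def yes_no_question(sentence):
    """Naively invert subject and auxiliary: 'The sky is blue.' -> 'Is the sky blue?'"""
    words = sentence.rstrip(".").split()
    for i, word in enumerate(words):
        if word.lower() in AUXILIARIES and i > 0:
            subject = " ".join(words[:i])
            rest = " ".join(words[i + 1:])
            # Naive lowercasing of the subject; a real system would
            # preserve proper-noun capitalization.
            return f"{word.capitalize()} {subject[0].lower() + subject[1:]} {rest}?"
    return None  # no auxiliary found; a real system would apply do-support

print(yes_no_question("The sky is blue."))  # Is the sky blue?
```

The rule's failure case (returning None when no auxiliary is present) hints at why the paper layers semantic resources on top: pure syntax decides how to invert, but semantics decides what is worth asking.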


Competence-based Question Generation

This work defines competence-based (CB) question generation, and focuses on queries over lexical semantic knowledge involving implicit argument and subevent structure of verbs.

Improving Unsupervised Question Answering via Summarization-Informed Question Generation

A distantly-supervised QG method which uses questions generated heuristically from summaries as a source of training data for a QG system, and substantially outperforms previous unsupervised models on three in-domain datasets and three out-of-domain datasets.

Dense Paraphrasing for Textual Enrichment

This paper builds the first complete DP dataset, provides the scope and design of the annotation task, and presents results demonstrating how this DP process can enrich a source text to improve inferencing and Question Answering (QA) task performance.

Asking It All: Generating Contextualized Questions for any Semantic Role

The task of role question generation is introduced, which requires producing a set of questions asking about all possible semantic roles of the predicate, and a two-stage model is developed, which first produces a context-independent question prototype for each role and then revises it to be contextually appropriate for the passage.

Enhancing Question Generation with Commonsense Knowledge

Experimental results on SQuAD show that the proposed methods are able to noticeably improve the QG performance on both automatic and human evaluation metrics, demonstrating that incorporating external commonsense knowledge with multi-task learning can help the model generate human-like and high-quality questions.

Automatically Generating Cause-and-Effect Questions from Passages

This work builds a pipeline that extracts causal relations from passages of input text, and feeds these as input to a state-of-the-art neural question generator, resulting in a new, publicly available collection of cause-and-effect questions.

Predicate Representations and Polysemy in VerbNet Semantic Parsing

Despite recent advances in semantic role labeling propelled by pre-trained text encoders like BERT, performance lags behind when applied to predicates observed infrequently during training or to …

Improving Neural Question Generation using Deep Linguistic Representation

The experimental results demonstrate that the proposed approach outperforms the state-of-the-art QG systems, and significantly improves the baseline by 17.2% and 6.

Expanding, Retrieving and Infilling: Diversifying Cross-Domain Question Generation with Flexible Templates

A novel expand-retrieve-infill framework is proposed that first incorporates flexible templates into a neural model to generate diverse question expressions with sentence-structure guidance; evaluation shows the superiority of the method in producing more diverse questions while maintaining high quality and consistency under both automatic and human evaluation.

Question Generation with Minimal Recursion Semantics

The performance of the proposed method is compared against other syntax- and rule-based systems, and the results reveal the challenges of current research on question generation and indicate directions for future work.

Improving Question Generation With to the Point Context

This work proposes a method to jointly model the unstructured sentence and the structured answer-relevant relation (extracted from the sentence in advance) for question generation and shows that the to-the-point context helps the question generation model achieve significant improvements on several automatic evaluation metrics.

Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering

This paper proposes two semantics-enhanced rewards obtained from downstream question paraphrasing and question answering tasks to regularize the QG model to generate semantically valid questions, and proposes a QA-based evaluation method which measures the model’s ability to mimic human annotators in generating QA training data.

Learning to Ask: Neural Question Generation for Reading Comprehension

An attention-based sequence learning model is proposed for the task, the effect of encoding sentence- vs. paragraph-level information is investigated, and results show that the system significantly outperforms the state-of-the-art rule-based system.

Question Generation via Overgenerating Transformations and Ranking

This framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed.
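The overgenerate-and-rank idea described above can be sketched in a few lines: apply several transformation rules to produce candidate questions, then score the candidates and keep the best. The rules and features below are toy stand-ins, not the cited system's statistical ranker:

```python
# Overgenerate-and-rank sketch: multiple rules propose candidates,
# a simple feature-based scorer picks the winner. Illustrative only.

def rule_is_what(sentence):
    # "X is Y." -> "What is Y?"
    if " is " in sentence:
        _, pred = sentence.rstrip(".").split(" is ", 1)
        return f"What is {pred}?"
    return None

def rule_yes_no(sentence):
    # "X is Y." -> "Is X Y?" (naive lowercasing of the subject)
    if " is " in sentence:
        subj, pred = sentence.rstrip(".").split(" is ", 1)
        return f"Is {subj[0].lower()}{subj[1:]} {pred}?"
    return None

def score(question):
    # Toy features: prefer shorter questions, penalize vague pronouns.
    penalty = sum(question.lower().split().count(p) for p in ("it", "this"))
    return -len(question.split()) - 5 * penalty

def best_question(sentence, rules=(rule_is_what, rule_yes_no)):
    candidates = [q for rule in rules if (q := rule(sentence))]
    return max(candidates, key=score) if candidates else None

print(best_question("Paris is the capital of France."))
```

Overgeneration tolerates imprecise individual rules because the ranker, trained on features of the input, output, and transformations, filters out the bad candidates.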

Leveraging Multiple Views of Text for Automatic Question Generation

This work explores using multiple views from different parsers to create a tree structure which represents items of interest for question generation, which resulted in a 17% reduction in the error rate compared with the prior work.

A Semantic Role-based Approach to Open-Domain Automatic Question Generation

A novel rule-based system for automatic generation of factual questions from sentences, using semantic role labeling (SRL) as the main form of text analysis, which outperforms the neural system in both average quality and variety of generated questions.
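The core move in SRL-based QG is to replace one labeled argument of a predicate with a wh-phrase. A hedged sketch of that idea, with PropBank-style role labels and an example frame that are illustrative rather than the cited system's actual rules:

```python
# Sketch of SRL-driven QG: map an argument's role label to a wh-word
# and ask about that argument. Real systems also handle do-support and
# subject inversion when questioning non-subject roles.

WH_FOR_ROLE = {
    "ARG0": "Who",        # agent
    "ARG1": "What",       # patient/theme
    "ARGM-TMP": "When",   # temporal modifier
    "ARGM-LOC": "Where",  # locative modifier
    "ARGM-CAU": "Why",    # cause
}

def role_question(frame, target_role):
    """Ask about `target_role` in an SRL frame.

    `frame` maps role labels to surface strings, plus a "V" entry
    for the predicate itself.
    """
    wh = WH_FOR_ROLE[target_role]
    # Keep the remaining arguments in their original order.
    rest = [text for role, text in frame.items()
            if role not in (target_role, "V")]
    return f"{wh} {frame['V']} {' '.join(rest)}?".replace(" ?", "?")

frame = {"ARG0": "the committee", "V": "approved",
         "ARG1": "the proposal", "ARGM-TMP": "on Monday"}
print(role_question(frame, "ARG0"))
# -> Who approved the proposal on Monday?
```

Questioning the subject role (ARG0) yields grammatical output directly; questioning other roles requires the auxiliary insertion and inversion that a full rule-based system supplies.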

Neural Question Generation from Text: A Preliminary Study

A preliminary study on neural question generation from text with the SQuAD dataset is conducted, and the experiment results show that the method can produce fluent and diverse questions.

Verbnet: a broad-coverage, comprehensive verb lexicon

VerbNet is created, a verb lexicon compatible with WordNet but with explicitly stated syntactic and semantic information, using Levin verb classes to systematically construct lexical entries, to address the gap in coverage of syntactic frames and predicate argument structures associated with individual verb senses.

Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus

Answer-Clue-Style-aware Question Generation (ACS-QG), which aims at automatically generating high-quality and diverse question-answer pairs from an unlabeled text corpus at scale by imitating the way a human asks questions, dramatically outperforms state-of-the-art neural question generation models in terms of generation quality.