Corpus ID: 52011757

A framework for automatic question generation from text using deep reinforcement learning

@article{Kumar2018AFF,
  title={A framework for automatic question generation from text using deep reinforcement learning},
  author={Vishwajeet Kumar and Ganesh Ramakrishnan and Yuan-Fang Li},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.04961}
}
Automatic question generation (QG) is a useful yet challenging task in NLP. [...] The overall model is trained by learning the parameters of the generator network which maximizes the reward. Our framework allows us to directly optimize any task-specific score, including evaluation measures such as BLEU, GLEU, and ROUGE-L, suitable for sequence-to-sequence tasks such as QG. Our comprehensive evaluation shows that our approach significantly outperforms state-of-the-art systems on the widely-used…
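The policy-gradient training described in the abstract can be sketched concretely. Below is a minimal, hypothetical REINFORCE-style loss that uses sentence-level BLEU of a sampled question as the reward; the function name, tensor shapes, and the id2tok mapping are illustrative assumptions, not the authors' implementation.

    import torch
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    def reinforce_loss(log_probs, sampled_ids, reference_ids, id2tok):
        # log_probs: (T, V) per-step log-softmax outputs of the generator
        # sampled_ids: (T,) token ids sampled from the generator's distribution
        # reference_ids: gold question token ids; id2tok maps ids to strings
        hyp = [id2tok[i] for i in sampled_ids.tolist()]
        ref = [[id2tok[i] for i in reference_ids.tolist()]]
        # Any sequence-level metric (BLEU, GLEU, ROUGE-L, ...) can stand in here.
        reward = sentence_bleu(ref, hyp,
                               smoothing_function=SmoothingFunction().method1)
        # REINFORCE: weight the sampled sequence's log-likelihood by its reward
        # and negate, so minimizing this loss maximizes the expected reward.
        tok_logp = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1)
        return -reward * tok_logp.sum()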
Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation
TLDR
This model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text.
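A rough sketch of such a mixed objective (the mixing weight gamma and its default value are assumptions for illustration, not taken from the paper):

    def hybrid_loss(ce_loss, rl_loss, gamma=0.98):
        # Hybrid evaluator: interpolate the RL (reward) loss with the
        # cross-entropy loss. The CE term anchors the generator to fluent,
        # likely text while the RL term pushes toward the task reward.
        return gamma * rl_loss + (1.0 - gamma) * ce_loss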
NATURAL QUESTION GENERATION
Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on…
Answer-driven Deep Question Generation based on Reinforcement Learning
TLDR
An Answer-driven Deep Question Generation model based on the encoder-decoder framework is designed; a semantic-rich fusion attention mechanism supports the decoding process by integrating the answer with the document representations, promoting proper handling of answer information during generation.
Evaluating BERT-based Rewards for Question Generation with Reinforcement Learning
  Peide Zhu, C. Hauff · Computer Science · ICTIR · 2021
TLDR
This paper first categorizes existing rewards systematically, then provides a fair empirical evaluation of different reward functions (including three proposed here for QG) in a common framework, and finds rewards that model answerability to be the most effective.
Natural Question Generation with Reinforcement Learning Based Graph-to-Sequence Model
TLDR
This paper proposes a novel reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG that outperforms previous state-of-the-art methods by a large margin on the SQuAD dataset.
Neural Question Generation with Answer Pivot
TLDR
This paper treats the answers as the hidden pivot for question generation, combines the question generation and answer selection processes in a joint model, and achieves the state-of-the-art result on the SQuAD dataset according to automatic metrics and human evaluation.
Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation
TLDR
A basic sequence-to-sequence QG model is augmented with a dynamic, paragraph-specific dictionary and copy attention that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules.
Neural Text Generation from Structured and Unstructured Data
TLDR
This thesis considers neural table-to-text generation and neural question generation tasks for text generation from structured and unstructured data, respectively, and shows that simpler, properly tuned models are at least competitive across several natural language processing tasks.
Evaluating Rewards for Question Generation Models
TLDR
It is confirmed that training with policy gradient methods leads to increases in the metrics used as rewards, and it is shown that although these metrics have previously been assumed to be good proxies for question quality, they are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source.
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering
TLDR
This paper proposes two semantics-enhanced rewards obtained from downstream question paraphrasing and question answering tasks to regularize the QG model to generate semantically valid questions, and proposes a QA-based evaluation method which measures the model's ability to mimic human annotators in generating QA training data.

References

Showing 1-10 of 21 references
Automating Reading Comprehension by Generating Question and Answer Pairs
TLDR
A novel two-stage process to generate question-answer pairs from text using sequence-to-sequence models is presented, with global attention and answer encoding for generating the question most relevant to the answer.
Learning to Ask: Neural Question Generation for Reading Comprehension
TLDR
An attention-based sequence learning model is proposed for the task, and the effect of encoding sentence- vs. paragraph-level information is investigated; results show that the system significantly outperforms the state-of-the-art rule-based system.
Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus
TLDR
The 30M Factoid Question-Answer Corpus is presented, an enormous question-answer pair corpus produced by applying a novel neural network architecture to the knowledge base Freebase to transduce facts into natural language questions.
Sequence Level Training with Recurrent Neural Networks
TLDR
This work proposes a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE, and outperforms several strong baselines for greedy generation.
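For intuition, sequence-level training of this kind is commonly implemented as REINFORCE with a baseline subtracted from the sequence reward to reduce gradient variance; the sketch below is a generic illustration under that assumption, not the paper's exact MIXER procedure.

    def sequence_level_loss(seq_log_prob, reward, baseline):
        # seq_log_prob: summed log-probability of the sampled output sequence
        # reward: test-time metric (e.g. BLEU or ROUGE) of that sequence
        # baseline: estimate of the expected reward; subtracting it reduces
        # the variance of the policy gradient without biasing it
        return -(reward - baseline) * seq_log_prob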
Modeling Coverage for Neural Machine Translation
TLDR
This paper proposes coverage-based NMT, which maintains a coverage vector to keep track of the attention history and improves both translation quality and alignment quality over standard attention-based NMT.
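A minimal sketch of the coverage idea under soft attention (names and shapes are illustrative): the coverage vector is the running sum of past attention distributions, and feeding it back into the attention scorer discourages re-attending to source words that are already covered.

    import torch

    def update_coverage(coverage, attn):
        # coverage: (src_len,) accumulated attention mass per source position
        # attn: (src_len,) attention distribution produced at the current step
        return coverage + attn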
Incorporating Copying Mechanism in Sequence-to-Sequence Learning
TLDR
This paper incorporates copying into neural network-based Seq2Seq learning and proposes a new model called CopyNet, with an encoder-decoder structure that nicely integrates the regular way of word generation in the decoder with a new copying mechanism, which can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
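A pointer-generator-style mixture in the spirit of CopyNet can make this concrete; the gate p_gen and the assumption that source tokens map into the output vocabulary are illustrative simplifications, since CopyNet itself scores copying somewhat differently.

    import torch

    def mix_generate_and_copy(p_vocab, attn, src_ids, p_gen):
        # p_vocab: (V,) softmax over the output vocabulary (generate mode)
        # attn: (src_len,) attention over source tokens (copy mode)
        # src_ids: (src_len,) LongTensor of source token ids in the vocabulary
        # p_gen: scalar in [0, 1] gating generation against copying
        p = p_gen * p_vocab
        # Scatter the copy probability mass onto the source tokens' entries.
        return p.index_add(0, src_ids, (1.0 - p_gen) * attn)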
Neural Machine Translation by Jointly Learning to Align and Translate
TLDR
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
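The (soft-)search can be written down as additive, Bahdanau-style attention; the weight names below are illustrative assumptions.

    import torch

    def additive_attention(dec_state, enc_states, W_dec, W_enc, v):
        # dec_state: (h,) current decoder hidden state
        # enc_states: (src_len, h) encoder annotations of the source sentence
        # W_dec, W_enc: (d, h) learned projections; v: (d,) scoring vector
        scores = torch.tanh(enc_states @ W_enc.T + dec_state @ W_dec.T) @ v
        attn = torch.softmax(scores, dim=0)   # soft alignment over source words
        context = attn @ enc_states           # expected annotation (context)
        return context, attn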
Automatic factual question generation from text
TLDR
This research supports the idea that natural language processing can help teachers efficiently create instructional content by automating the creation of a specific type of assessment item, and provides solutions to some of the major challenges in question generation.
Automatic English Question Generation System Based on Template Driven Scheme
TLDR
A system designed, implemented, and tested to automate the question generation process is proposed; it uses a pure syntactic pattern-matching approach to generate content-related questions in order to improve the independent study of any textual material.
SQuAD: 100,000+ Questions for Machine Comprehension of Text
TLDR
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).