Unified Question Generation with Continual Lifelong Learning

@inproceedings{Yuan2022UnifiedQG,
  title={Unified Question Generation with Continual Lifelong Learning},
  author={Wei Yuan and Hongzhi Yin and Tieke He and Tong Chen and Qiufeng Wang and Li-zhen Cui},
  booktitle={Proceedings of the ACM Web Conference 2022},
  year={2022}
}
Question Generation (QG) is a challenging Natural Language Processing task that aims to generate questions from a given answer and its context. Existing QG methods mainly focus on building or training models for specific QG datasets, and they are subject to two major limitations: (1) they are dedicated to a specific QG format (e.g., answer-extraction or multi-choice QG), so addressing a new QG format requires re-designing the QG model; (2) optimal performance is only…
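
The format-unification idea can be made concrete as a text-to-text serialization in which every QG variant shares one flat input schema. The prefixes and field names below are illustrative assumptions, not the paper's exact format.

def to_unified_input(task, context, answer="", options=None):
    """Serialize any QG format as one source string for a seq2seq model."""
    parts = ["task: " + task, "context: " + context]
    if answer:
        parts.append("answer: " + answer)
    if options:
        parts.append("options: " + " | ".join(options))
    return " ".join(parts)

# Answer-extraction QG and multi-choice QG now share one input schema:
print(to_unified_input("answer-extraction-qg",
                       "The Nile flows through Egypt.", answer="Egypt"))
print(to_unified_input("multi-choice-qg",
                       "The Nile flows through Egypt.",
                       answer="Egypt", options=["Egypt", "Kenya", "Chad"]))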

Citations

CIRCLE: Continual Repair across Programming Languages
TLDR
The experimental results demonstrate that CIRCLE not only effectively and efficiently repairs multiple programming languages in continual learning settings, but also achieves state-of-the-art performance (e.g., fixes 64 Defects4J bugs) with a single repair model.
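
CIRCLE's exact training recipe is not reproduced here; the sketch below shows plain rehearsal (experience replay), a standard way to mitigate catastrophic forgetting when one model is trained on a sequence of tasks such as different programming languages. All names, defaults, and the train_step stub are illustrative.

import random

class ReplayMemory:
    """Keep a small sample of past tasks' examples for rehearsal."""
    def __init__(self, capacity_per_task=200):
        self.capacity = capacity_per_task
        self.store = {}  # task name -> retained examples

    def add_task(self, task, examples):
        k = min(self.capacity, len(examples))
        self.store[task] = random.sample(examples, k)

    def sample(self, k):
        pool = [ex for exs in self.store.values() for ex in exs]
        return random.sample(pool, min(k, len(pool)))

def train_step(batch):
    pass  # stand-in for one gradient update of a real repair model

def train_on_task(task, data, memory, batch_size=32, replay_ratio=0.25):
    replay_k = int(batch_size * replay_ratio)
    fresh = batch_size - replay_k
    for i in range(0, len(data), fresh):
        # Mix fresh examples with replayed ones from earlier tasks.
        train_step(data[i:i + fresh] + memory.sample(replay_k))
    memory.add_task(task, data)  # remember this task for later rehearsal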

References

Showing 1-10 of 90 references
Improving Neural Question Generation using Deep Linguistic Representation
TLDR
The experimental results demonstrate that the proposed approach outperforms the state-of-the-art QG systems, and significantly improves the baseline by 17.2% and 6.
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering
TLDR
This paper proposes two semantics-enhanced rewards obtained from downstream question paraphrasing and question answering tasks to regularize the QG model to generate semantically valid questions, and proposes a QA-based evaluation method which measures the model’s ability to mimic human annotators in generating QA training data.
Joint Learning of Question Answering and Question Generation
TLDR
Two training algorithms for learning better QA and QG models by leveraging one another are presented; it is found that a QG model can be easily improved by a QA model via policy gradient, whereas directly applying a GAN that treats all generated questions as negative instances does not improve the QA model's accuracy.
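
The policy-gradient direction can be sketched as a REINFORCE update in which a fixed QA model scores sampled questions. The qg_model.sample and qa_model.answer_f1 interfaces below are assumed for illustration, not an actual API from the paper.

import torch

def policy_gradient_step(qg_model, qa_model, context, answer,
                         optimizer, baseline=0.5):
    # Sample a question and its total log-probability from the QG policy.
    question, log_prob = qg_model.sample(context, answer)       # assumed API
    # Reward: can a fixed QA model recover the original answer? (e.g., F1)
    with torch.no_grad():
        reward = qa_model.answer_f1(question, context, answer)  # assumed API
    # REINFORCE with a constant baseline to reduce gradient variance.
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward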
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
TLDR
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
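
The text-to-text interface is easy to exercise with the public Hugging Face checkpoint; note that the "generate question:" prefix below is a hypothetical fine-tuning prefix for QG, not one of T5's pre-trained tasks.

from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is "text in, text out"; only the task prefix changes.
source = ("generate question: answer: Egypt "
          "context: The Nile flows through Egypt.")
inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))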
Accelerating Real-Time Question Answering via Question Generation
TLDR
Ocean-Q introduces a QG model to generate a large pool of question-answer (QA) pairs offline, then in real time matches an input question with the candidate QA pool to predict the answer without question encoding, and proposes a new data augmentation method to improve QG quality.
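
The offline-pool idea can be sketched with a cheap lexical matcher; Ocean-Q's actual retrieval component is not reproduced here, and the tiny pool below is illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Offline: a QG model has produced a pool of (question, answer) pairs.
qa_pool = [
    ("Which river flows through Egypt?", "the Nile"),
    ("What is the capital of France?", "Paris"),
]
pool_questions = [q for q, _ in qa_pool]
vectorizer = TfidfVectorizer().fit(pool_questions)
pool_vecs = vectorizer.transform(pool_questions)

# Online: answer an incoming question by nearest-neighbour lookup,
# avoiding any heavyweight question encoding at serving time.
def answer(query):
    sims = cosine_similarity(vectorizer.transform([query]), pool_vecs)[0]
    return qa_pool[sims.argmax()][1]

print(answer("What river runs through Egypt?"))  # -> "the Nile"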
Few-shot Natural Language Generation for Task-Oriented Dialog
TLDR
FewshotWOZ is presented, the first NLG benchmark to simulate the few-shot learning setting in task-oriented dialog systems, and the proposed SC-GPT model significantly outperforms existing methods, measured by various automatic metrics and human evaluations.
Unified Language Model Pre-training for Natural Language Understanding and Generation
TLDR
A new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks; it compares favorably with BERT on the GLUE benchmark and on the SQuAD 2.0 and CoQA question answering tasks.
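
UniLM's core trick is a single Transformer whose self-attention mask is switched per training objective. The sketch below builds the sequence-to-sequence mask: source tokens attend bidirectionally within the source, while target tokens see the whole source plus their own left context.

import torch

def seq2seq_attention_mask(src_len, tgt_len):
    """Boolean mask where True means 'may attend'."""
    n = src_len + tgt_len
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:, :src_len] = True                      # every token sees the source
    mask[src_len:, src_len:] = torch.tril(        # causal within the target
        torch.ones(tgt_len, tgt_len, dtype=torch.bool))
    return mask

print(seq2seq_attention_mask(3, 2).int())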
Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks
TLDR
A maxout pointer mechanism with a gated self-attention encoder is proposed to address the challenges of processing long text inputs for question generation; it outperforms previous approaches with either sentence-level or paragraph-level inputs.
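
A compact PyTorch sketch of gated self-attention fusion in the spirit of this encoder (layer names are illustrative; the maxout pointer, which takes the maximum rather than the sum over copy scores of repeated tokens, is omitted for brevity).

import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Fuse a passage encoding with its self-attended summary via a gate."""
    def __init__(self, dim):
        super().__init__()
        self.attn_proj = nn.Linear(dim, dim, bias=False)
        self.fuse = nn.Linear(2 * dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h):                                  # h: (B, T, D)
        scores = h @ self.attn_proj(h).transpose(1, 2)     # (B, T, T)
        s = torch.softmax(scores, dim=-1) @ h              # self-matched repr.
        hs = torch.cat([h, s], dim=-1)
        f = torch.tanh(self.fuse(hs))                      # fused features
        g = torch.sigmoid(self.gate(hs))                   # update gate
        return g * f + (1 - g) * h                         # gated mixture

print(GatedSelfAttention(64)(torch.randn(2, 7, 64)).shape)  # (2, 7, 64)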
Question Generation for Question Answering
TLDR
Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.
Question-type Driven Question Generation
TLDR
The question type is fused into a seq2seq model to guide question generation and address the type-mismatch problem, achieving a significant improvement in the accuracy of question type prediction.
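
One simple way to fuse a question type into a seq2seq model is to mix a type embedding into the decoder's initial state; the fusion below is an illustrative choice, not necessarily the paper's exact design.

import torch
import torch.nn as nn

QUESTION_TYPES = ["what", "who", "when", "where", "why", "how", "which", "yes/no"]

class TypeConditionedDecoderInit(nn.Module):
    """Build the decoder's initial hidden state from the encoder state
    plus an embedding of the target question type."""
    def __init__(self, hidden):
        super().__init__()
        self.type_emb = nn.Embedding(len(QUESTION_TYPES), hidden)
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, enc_final, type_ids):
        fused = torch.cat([enc_final, self.type_emb(type_ids)], dim=-1)
        return torch.tanh(self.proj(fused))

init = TypeConditionedDecoderInit(128)
print(init(torch.randn(4, 128), torch.tensor([0, 1, 5, 2])).shape)  # (4, 128)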