Question Generation for Adaptive Education

Megha Srivastava and Noah D. Goodman
Intelligent and adaptive online education systems aim to make high-quality education available for a diverse range of students. However, existing systems usually depend on a pool of hand-made questions, limiting how fine-grained and open-ended they can be in adapting to individual students. We explore targeted question generation as a controllable sequence generation task. We first show how to fine-tune pre-trained language models for deep knowledge tracing (LM-KT). This model accurately… 
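To make the LM-KT setup concrete, here is a minimal sketch of how a student's interaction history might be serialized into a single token sequence that a causal language model can be fine-tuned on. The delimiter tokens (`<|q|>`, `<|a|>`) and the overall format are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch: serialize (question, correctness) pairs for an LM-KT-style model.
# The delimiters and the y/n correctness encoding are assumptions made for
# illustration only.

def serialize_history(interactions):
    """Turn (question_text, answered_correctly) pairs into one training string."""
    parts = []
    for question, correct in interactions:
        parts.append(f"<|q|> {question} <|a|> {'y' if correct else 'n'}")
    return " ".join(parts)

history = [
    ("translate 'el gato' to English", True),
    ("translate 'la casa' to English", False),
]
prompt = serialize_history(history)
```

A model fine-tuned on such sequences can then be queried for the probability of the `y`/`n` token after a new question, which is the knowledge-tracing signal the abstract describes.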


Few-shot Question Generation for Personalized Feedback in Intelligent Tutoring Systems

This work explores automatically generated questions as personalized feedback in an ITS that can pinpoint correct, incorrect, or missing phrases in student answers, and can guide students toward the correct answer by asking a question in natural language.

GRAM: Fast Fine-tuning of Pre-trained Language Models for Content-based Collaborative Filtering

This work proposes GRAM (GRadient Accumulation for Multi-modality in CCF), which exploits the fact that a given item often appears multiple times within a batch of interaction histories, and significantly improves training efficiency.



Combining adaptivity with progression ordering for intelligent tutoring systems

A new approach for automatically and adaptively sequencing practice activities for a particular learner is proposed and applied to foreign language learning; results suggest that such an approach may be significantly better than an expert system when the rate of learning varies widely among students.

Deep Knowledge Tracing

The utility of Recurrent Neural Networks for modeling student learning is explored; the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks.
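As a concrete illustration of this setup, the following is a toy forward pass of a DKT-style recurrent model in pure Python. The one-hot input over (skill, correctness) pairs and the per-skill output probabilities follow the standard DKT formulation, but the vanilla-RNN form, tiny layer sizes, and random weights are simplifications for illustration (the actual model is trained on large student datasets):

```python
import math
import random

# Toy DKT-style forward pass. Input at each step: one-hot over
# (skill, correctness); output: predicted correctness probability per skill.
N_SKILLS, HIDDEN = 3, 4
random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W_in = rand_matrix(HIDDEN, 2 * N_SKILLS)  # input-to-hidden weights
W_h = rand_matrix(HIDDEN, HIDDEN)         # hidden-to-hidden weights
W_out = rand_matrix(N_SKILLS, HIDDEN)     # hidden-to-output weights

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(h, skill, correct):
    # One-hot encode the interaction: index `skill` if wrong, shifted if right.
    x = [0.0] * (2 * N_SKILLS)
    x[skill + (N_SKILLS if correct else 0)] = 1.0
    h_new = [math.tanh(sum(W_in[i][j] * x[j] for j in range(len(x)))
                       + sum(W_h[i][j] * h[j] for j in range(HIDDEN)))
             for i in range(HIDDEN)]
    probs = [sigmoid(sum(W_out[k][j] * h_new[j] for j in range(HIDDEN)))
             for k in range(N_SKILLS)]
    return h_new, probs

h = [0.0] * HIDDEN
for skill, correct in [(0, True), (1, False), (0, True)]:
    h, probs = step(h, skill, correct)
```

After each interaction, `probs` gives the model's estimate that the student would answer each skill correctly next, which is the state an adaptive system can use for curriculum decisions.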

Machine Learning–Driven Language Assessment

The approach is the first to use machine learning and natural language processing to induce proficiency scales based on a given standard, and then use linguistic models to estimate item difficulty directly for computer-adaptive testing.


Evidence from a series of studies comparing conventional and adaptive testing procedures is presented showing that the adaptive procedure results in more accurate mastery classifications than do conventional mastery tests, while using fewer test questions.

The Impact of Individualizing Student Models on Necessary Practice Opportunities

This work examines whether the difference in the expected number of practice opportunities required when mastery is assessed using an individual student's own estimated model parameters, rather than the population model, is large enough to matter for instructional decisions.

Teaching Multiple Concepts to Forgetful Learners

This paper casts the problem of adaptively teaching a forgetful learner as a novel discrete optimization problem, proposes a simple greedy teaching strategy, and derives strong performance guarantees based on two intuitive data-dependent parameters.

Optimizing the Learning of a Second-Language Vocabulary.

Four computer-controlled optimization strategies are proposed and evaluated experimentally for learning a large German-English vocabulary; each takes account of the subject's response history in deciding which items to present next.
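The response-history-driven selection described above can be sketched as a simple rule: present next the vocabulary item whose estimated recall probability is lowest. The running-average estimator below is an illustrative stand-in for the paper's actual learning model, and the helper names are hypothetical:

```python
# Sketch of response-history-based item selection. The recall estimator
# (smoothed success rate) is a simplifying assumption, not the original
# strategy's learning model.

def recall_estimate(history, prior=0.3):
    """Estimate recall probability from a list of past outcomes (bools)."""
    if not history:
        return prior
    return (prior + sum(history)) / (1 + len(history))

def next_item(response_histories):
    """Pick the item whose estimated recall is weakest."""
    return min(response_histories,
               key=lambda item: recall_estimate(response_histories[item]))

histories = {
    "der Hund": [True, True, True],
    "die Katze": [False, True],
    "das Haus": [False, False],
}
chosen = next_item(histories)
```

Here the item with two failures gets the lowest recall estimate and is presented next; items answered reliably are deferred.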

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn NLP tasks without any explicit supervision when trained on WebText, a new dataset of millions of webpages, suggesting a promising path toward language processing systems that learn to perform tasks from naturally occurring demonstrations.

Knowledge tracing: Modeling the acquisition of procedural knowledge

An effort to model students' changing knowledge states during skill acquisition is described, and a series of studies examining the empirical validity of knowledge tracing, which has led to modifications of the process, is reviewed.
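The knowledge-tracing model associated with this line of work updates the probability that a skill is known after each observed response, using slip, guess, and learning parameters. A sketch of the standard Bayesian update (parameter values are illustrative):

```python
# Standard Bayesian knowledge-tracing update: posterior over "skill known"
# given one response, then a transition for the chance of learning.
# Parameter values below are illustrative, not fitted.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update P(skill known) after observing one response."""
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Account for the chance of learning during this opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3
for outcome in [True, True, True]:
    p = bkt_update(p, outcome)
```

A run of correct answers drives the estimate sharply upward, which is how such a tracker decides when a student has mastered a skill.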

CTRL: A Conditional Transformer Language Model for Controllable Generation

CTRL is released, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior, providing more explicit control over text generation.