Automated generation of assessment tests from domain ontologies

@article{VinuE2017AutomatedGO,
  title={Automated generation of assessment tests from domain ontologies},
  author={Vinu, E. V. and Kumar, P. Sreenivasa},
  journal={Semantic Web},
  year={2017},
  volume={8},
  pages={1023--1047}
}
We investigate the effectiveness of OWL-DL ontologies in generating multiple-choice questions (MCQs) that can be employed for conducting large-scale assessments. This paper elaborates the details of a prototype system called the Automatic Test Generation (ATG) system and its extended version, the Extended-ATG system. The ATG system was useful in generating multiple-choice question-sets of a required cardinality from a given formal ontology. This system is further enhanced to include…
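The core idea of ontology-based MCQ generation can be illustrated with a minimal sketch. This is not the ATG algorithm itself (which operates on full OWL-DL ontologies); it is a hypothetical toy in which class-membership assertions supply the stem, the key, and the distractors (instances of sibling classes):

```python
# Illustrative sketch only: a toy MCQ generator over class-membership
# assertions. The mini "ontology" below and all names are hypothetical;
# the actual ATG system works on OWL-DL ontologies.
import random

# Toy ABox: individual -> asserted class
ASSERTIONS = {
    "Lion": "Mammal",
    "Tiger": "Mammal",
    "Eagle": "Bird",
    "Sparrow": "Bird",
    "Salmon": "Fish",
    "Trout": "Fish",
}

def generate_mcq(target_class, n_distractors=3, rng=None):
    """Build one MCQ: the key is an instance of target_class;
    distractors are instances of the other classes."""
    rng = rng or random.Random(0)
    keys = [i for i, c in ASSERTIONS.items() if c == target_class]
    others = [i for i, c in ASSERTIONS.items() if c != target_class]
    key = rng.choice(keys)
    options = rng.sample(others, n_distractors) + [key]
    rng.shuffle(options)
    return {
        "stem": f"Which of the following is a {target_class}?",
        "options": options,
        "answer": key,
    }

q = generate_mcq("Mammal")
print(q["stem"], q["options"])
```

Iterating such a generator over every class in the ontology would yield a question-set of any required cardinality, which is the property the abstract attributes to the ATG system.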


Ontology-Based Generation of Medical, Multi-term MCQs

  • J. Leo, G. Kurdi, W. Dowling
  • Computer Science
    International Journal of Artificial Intelligence in Education
  • 2019
TLDR
This paper presents a novel ontology-based approach that exploits classes and existential restrictions to generate case-based questions that are suitable for scenarios beyond mere knowledge recall and generates more than 3 million questions for four physician specialities.

Generating Answerable Questions from Ontologies for Educational Exercises

TLDR
Initial results showed that it is possible to create ontology-based questions; three approaches were designed: template variables using foundational ontology categories, template variables using main classes from the domain ontology, and sentences driven mostly by natural language generation techniques.

Difficulty-level Modeling of Ontology-based Factual Questions

TLDR
A detailed study is presented on the features (factors) of a question statement that could determine its difficulty level for three learner categories (experts, intermediates, and beginners), and ontology-based metrics are formulated for the same.

Evaluating the quality of the ontology-based auto-generated questions

TLDR
An experiment assesses the auto-generated questions’ difficulty, discrimination, and reliability using two statistical methods: Classical Test Theory and Item Response Theory, and studies the effect of the ontology-based generation strategies and the level of questions in Bloom’s taxonomy on the quality of the questions.

A comparative study of methods for a priori prediction of MCQ difficulty

TLDR
Two ontology-based measures for predicting the difficulty of multiple-choice questions are analyzed, and each measure is compared with expert prediction against the exam performance of 12 residents over a corpus of 231 medical case-based questions in multiple-choice format.

An Ontology-Driven Learning Assessment Using the Script Concordance Test

TLDR
The proposed automatic question generation was evaluated against the traditional, manually created SCT, and the results showed that the time required for test creation was significantly reduced, which confirms significant scalability improvements over traditional approaches.

Using a Common Sense Knowledge Base to Auto Generate Multi-Dimensional Vocabulary Assessments

TLDR
This paper addresses the problem of Multiple Choice Question (MCQ) generation for vocabulary learning assessments, specifically catered to young learners, and evaluates the efficacy of the approach by asking human annotators to rate the relevance of the questions generated by the system.

A Survey of Semantic Technology and Ontology for e-Learning

  • Computer Science
  • 2019
TLDR
This study systematically reviewed research on STO in e-learning systems from 2008 to 2018 and analyzed six types of ontology use and five aspects of educational ontology, as well as e-learning systems that use semantic approaches.

A Survey of Ontologies and Their Applications in e-Learning Environments

TLDR
This survey systematically reviewed research on ontology for e-learning from 2008 to 2020 and classified current educational ontologies into 6 types and analyzed them by 5 measures: design methodology, building routine, scale of ontology, level of semantic richness, and ontology evaluation.

A Systematic Review of Automatic Question Generation for Educational Purposes

TLDR
There is little focus in the current literature on generating questions of controlled difficulty, enriching question forms and structures, automating template construction, improving presentation, and generating feedback; the review also highlights the need to further improve experimental reporting, harmonise evaluation metrics, and investigate other, more feasible evaluation methods.

References

SHOWING 1-10 OF 52 REFERENCES

Ontology-Based Multiple Choice Question Generation

  • M. Al-Yahya
  • Computer Science
    TheScientificWorldJournal
  • 2014
TLDR
For the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework.

Improving Large-Scale Assessment Tests by Ontology Based Approach

TLDR
This work proposes three heuristic techniques to choose a desired number of significant MCQs that cover the required knowledge boundaries from a given ontology, and shows that the question-sets generated by this approach compare satisfactorily, in terms of precision and recall, to those prepared by domain experts.

A novel approach to generate MCQs from domain ontology

Automatic Question Pattern Generation for Ontology-based Question Answering

TLDR
An automatic question pattern generation method for ontology-based question answering is presented, using textual entailment to handle predictive questions, i.e., questions that users are predicted to ask in a domain, on the basis of a domain ontology.

Towards automatic generation of e-assessment using semantic web technologies

TLDR
This paper discusses and extends an innovative approach to automated generation of computer-assisted assessment (CAA) from semantic web-based domain ontologies, adding new ontology elements (annotations) to the meta-ontology used for generating questions and adding a semantic interpretation of the mapping between the ‘domain’ ontology and the target ‘question’ ontology.

Towards Competency Question-Driven Ontology Authoring

TLDR
This paper first analyzes real-world competency questions collected from two different domains, then employs the linguistic notion of presupposition to describe the ontology requirements implied by competency questions, and shows that these requirements can be tested automatically.

OntoQue: A Question Generation Engine for Educational Assesment Based on Domain Ontologies

  • M. Al-Yahya
  • Computer Science
    2011 IEEE 11th International Conference on Advanced Learning Technologies
  • 2011
TLDR
The OntoQue system is described, an engine for objective assessment item generation based on domain ontologies which uses knowledge inherent in the ontology about domain entities such as classes, properties, and individuals to generate semantically correct assessment items.

Automated Transformation of SWRL Rules into Multiple-Choice Questions

TLDR
A system and a set of strategies that can be used in order to automatically generate multiple choice questions from SWRL rules are presented to support further research in the area and to be a testbed for the development of more advanced assessment techniques.

Generating Mathematical Word Problems

  • Sandra Williams
  • Computer Science
    AAAI Fall Symposium: Question Generation
  • 2011
TLDR
A prototype system is described that generates mathematical word problems from ontologies in unrestricted domains; it builds on an existing ontology verbaliser that renders logical statements written in the Web Ontology Language as English sentences.
...