On the Automatic Assessment of Computational Thinking Skills: A Comparison with Human Experts

Abstract

Programming and computational thinking skills are promoted in schools worldwide. However, there is still a lack of tools that assist learners and educators in assessing these skills. We have implemented an assessment tool, called Dr. Scratch, that analyzes Scratch projects with the aim of assessing the level of development of several aspects of computational thinking. One issue to address in order to show its validity is to compare the (automatic) evaluations provided by the tool with the (manual) evaluations given by (human) experts. In this paper we compare the assessments provided by Dr. Scratch with over 450 evaluations of Scratch projects given by 16 experts in computer science education. Our results show strong correlations between the automatic and manual evaluations. As there is ample debate among educators on the use of this type of tool, we discuss its implications and limitations, and provide recommendations for further research.
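
The comparison the abstract describes reduces to correlating two paired sets of scores per project: the tool's rating and an expert's rating. The sketch below illustrates that kind of analysis with made-up numbers. Dr. Scratch sums seven computational thinking dimensions (each scored 0-3) into a 0-21 total; the use of SciPy's spearmanr is our assumption of a rank-correlation test suited to ordinal rubric scores, not the authors' published analysis code.

# Illustrative sketch only: all scores below are hypothetical, and the
# choice of Spearman's rank correlation is an assumption, not the
# paper's actual analysis pipeline.
from scipy.stats import spearmanr

# Dr. Scratch totals (0-21 scale) for a set of sample projects (made up)
dr_scratch_scores = [14, 9, 18, 6, 12, 20, 7, 15]
# Matched expert ratings of the same projects on the same scale (made up)
expert_scores = [13, 10, 17, 5, 11, 21, 8, 14]

# A high rho with a low p-value would indicate that the automatic and
# manual rankings of the projects largely agree.
rho, p_value = spearmanr(dr_scratch_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")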

DOI: 10.1145/3027063.3053216

Cite this paper

@inproceedings{MorenoLeon2017OnTA,
  title     = {On the Automatic Assessment of Computational Thinking Skills: A Comparison with Human Experts},
  author    = {Jes{\'u}s Moreno-Le{\'o}n and Marcos Rom{\'a}n-Gonz{\'a}lez and Casper Harteveld and Gregorio Robles},
  booktitle = {CHI Extended Abstracts},
  year      = {2017}
}