Let’s Ask Students About Their Programs, Automatically

@article{Lehtinen2021LetsAS,
  title={Let’s Ask Students About Their Programs, Automatically},
  author={Teemu Lehtinen and Andr{\'e} L. M. Santos and Juha Sorva},
  journal={2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC)},
  year={2021},
  pages={467-475}
}
Students sometimes produce code that works but that they themselves do not comprehend. For example, a student may apply a poorly understood code template, stumble upon a working solution through trial and error, or plagiarize. Similarly, passing an automated functional assessment does not guarantee that the student understands their code. One way to tackle these issues is to probe students' comprehension by asking them questions about their own programs. We propose an approach to automatically…
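
The truncated abstract points at the core idea: derive comprehension questions directly from the learner's own submission. As a rough sketch only, and not the paper's actual generator, the following Python walks a submission's AST and instantiates a few hypothetical question templates of the kind such a system could use; the function name and the templates are invented for illustration.

# Minimal sketch, NOT the paper's implementation: the helper
# generate_questions and its question templates are hypothetical.
import ast

def generate_questions(source: str) -> list[str]:
    """Instantiate question templates from constructs found in a student's code."""
    tree = ast.parse(source)
    questions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            questions.append(
                f"On which line is the function '{node.name}' defined?")
        elif isinstance(node, (ast.For, ast.While)):
            questions.append(
                f"How many times does the loop starting on line "
                f"{node.lineno} execute for a given input?")
        elif isinstance(node, ast.Assign):
            target = node.targets[0]
            if isinstance(target, ast.Name):
                questions.append(
                    f"What is the value of '{target.id}' right after "
                    f"line {node.lineno} executes?")
    return questions

# Example: a student submission that works but that its author
# may not fully comprehend.
student_code = """
def count_positive(numbers):
    count = 0
    for n in numbers:
        if n > 0:
            count += 1
    return count
"""

for question in generate_questions(student_code):
    print(question)

In the paper's terminology these are questions about learners' code (QLCs); because each question is derived from the submitted program itself, the expected answers can be computed and checked automatically.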

Citations

Students Struggle to Explain Their Own Program Code
TLDR
The results indicate that answering properly aligned QLCs correctly has a stronger correlation with student success and retention than merely submitting a correct program.
Jask: Generation of Questions About Learners' Code in Java
TLDR
Jask is a system that generates questions about a learner's code written in Java; it is integrated into a web-based system where students submit their code, answer questions about it, and obtain immediate formative feedback along with the correct answers.
Automatic Generation of Programming Exercises and Code Explanations using Large Language Models
TLDR
The analysis suggests that there is significant value in massive generative machine learning models as a tool for instructors, although some oversight remains necessary to ensure the quality of the generated content before it is delivered to students.
Automated Assessment in Computer Science Education: A State-of-the-Art Review
TLDR
This work surveys the state-of-the-art in the automated assessment of CS assignments, focusing on the supported types of exercises, the security measures adopted, the testing techniques used, the type of feedback produced, and the information these tools offer the teacher to understand and optimize learning.
What does this Python code do? An exploratory analysis of novice students’ code explanations
Motivation. Code reading skills are important for comprehension. Explain-in-plain-English (EiPE) tasks are one type of reading exercise that has shown promising results…

References

Showing 1–10 of 49 references
Students Struggle to Explain Their Own Program Code
TLDR
The results indicate that answering properly aligned QLCs correctly has a stronger correlation with student success and retention than merely submitting a correct program.
Exploring programming misconceptions: an analysis of student mistakes in visual program simulation exercises
TLDR
This study identifies the most common mistakes that students make in VPS and lends tentative support to the claims that many VPS mistakes are linked to programming misconceptions and that VPS logs can be a useful data source for studying students' understandings of CS1 content.
Autograding "Explain in Plain English" questions using NLP
TLDR
This work presents what is believed to be the first automatic grader for EiPE questions and its deployment in a large-enrollment introductory programming course. The implementation has an accuracy of 87–89%, which is similar to the performance of course teaching assistants trained for this task and compares favorably to automatic short-answer grading algorithms developed for other domains.
On the Use of Semantic-Based AIG to Automatically Generate Programming Exercises
TLDR
A semantic-based AIG approach is presented that uses linked open data (LOD) to automatically generate contextual programming exercises; it was incorporated into an existing self-assessment and practice tool for students learning computer programming.
If They Build It, Will They Understand It? Exploring the Relationship between Student Code and Performance
TLDR
It is found that, for students who had code in their projects, performance on specific questions in the written assessments is only very weakly correlated with the attributes of final projects typically used in artifact analysis, as well as with the attributes the authors use to define candidate code.
Stochastic Tree-Based Generation of Program-Tracing Practice Questions
TLDR
A language-generalizable approach is proposed for automatically generating a practically unlimited number of mental program-execution exercises, each constructed to a designated level of difficulty and incorporating the core programming-in-the-small themes: assignment, conditionals, loops, and arrays.
Fostering Program Comprehension in Novice Programmers - Learning Activities and Learning Trajectories
TLDR
This working group asserts that program comprehension (ProgComp) plays a critical part in the process of writing programs. It identified two main goals: to collect and define learning activities that explicitly address key components of program comprehension, and to define tentative theoretical learning trajectories that will guide teachers as they select and sequence those learning activities in their CS0/CS1/CS2 or K-12 courses.
Trace-based teaching in early programming courses
TLDR
It is shown that accurately modeling what occurs in memory and requiring students to trace code using this model improves student performance and increases retention; trace-based teaching led to statistically significant improvements in student grades, decreased drop and failure rates, and improved students' programming abilities.
Novice Rationales for Sketching and Tracing, and How They Try to Avoid It
TLDR
This study retrospectively interviewed 13 CS1 students about their decisions to sketch and draw on a recent programming exam, finding that when students do sketch, their sketching choices do not always align with a strict execution of the notional machine.
A Systematic Literature Review of Automated Feedback Generation for Programming Exercises
TLDR
It is found that feedback mostly focuses on identifying mistakes and less on fixing problems and taking a next step, and that teachers cannot easily adapt tools to their own needs.