Do student programmers all tend to write the same software tests?
@inproceedings{Edwards2014DoSP,
  title     = {Do student programmers all tend to write the same software tests?},
  author    = {Stephen H. Edwards and Zalia Shams},
  booktitle = {ITiCSE '14},
  year      = {2014}
}
While many educators have added software testing practices to their programming assignments, assessing the effectiveness of student-written tests using statement coverage or branch coverage has limitations. Researchers have begun investigating alternative approaches to assessing student-written tests, and this paper reports on an investigation of the quality of student-written tests in terms of the number of authentic, human-written defects those tests can detect. An experiment was conducted…
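To make the defect-detection measure concrete, here is a minimal sketch of the idea: score a student test suite by the fraction of known-defective implementations on which at least one test fails. The pool of buggy implementations, the test functions, and every name below are hypothetical illustrations, not the experiment's actual infrastructure.

```python
# A minimal sketch (not the paper's infrastructure) of scoring a student-written
# test suite by how many known-defective implementations it can detect, rather
# than by statement or branch coverage. All names here are hypothetical.

def buggy_add_off_by_one(a, b):
    return a + b + 1        # defect: off-by-one result

def buggy_add_ignores_second(a, b):
    return a                # defect: second argument ignored

DEFECTIVE_IMPLEMENTATIONS = [buggy_add_off_by_one, buggy_add_ignores_second]

# A "student test suite": each test takes the implementation under test and
# raises AssertionError when it detects a problem.
def test_small_values(add):
    assert add(1, 2) == 3

def test_zero(add):
    assert add(0, 0) == 0

STUDENT_TESTS = [test_small_values, test_zero]

def detects_defect(tests, implementation):
    """True if at least one test fails when run against the implementation."""
    for test in tests:
        try:
            test(implementation)
        except AssertionError:
            return True
    return False

def defect_detection_score(tests, defective_implementations):
    """Fraction of known-defective implementations caught by the test suite."""
    caught = sum(detects_defect(tests, impl) for impl in defective_implementations)
    return caught / len(defective_implementations)

if __name__ == "__main__":
    print(defect_detection_score(STUDENT_TESTS, DEFECTIVE_IMPLEMENTATIONS))  # 1.0 here
```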
32 Citations
Reconsidering Automated Feedback: A Test-Driven Approach
- Computer Science, SIGCSE
- 2015
This paper explains and analyzes a framework for identifying whether a student has adequately tested a specific feature of their code that is failing an instructor's tests, finding that an automated grading system's feedback for programming assignments often provided hints that may discourage reflective testing.
Measuring Unit Test Accuracy
- Computer Science, SIGCSE
- 2019
This paper introduces test accuracy as a measurement of how well unit tests distinguish acceptable from unacceptable function implementations, and compares how each such measure relates to the absence of bugs in the students' implementations.
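A rough sketch of the test-accuracy idea, under the assumption that a suite "accepts" an implementation when all of its tests pass and that each implementation carries a ground-truth acceptable/unacceptable label; the function names and data shapes are hypothetical, not the paper's actual tooling.

```python
# A minimal sketch of "test accuracy": treat the student test suite as a
# classifier that accepts an implementation when every test passes, and score
# it against implementations whose acceptability is already known.

def suite_accepts(tests, implementation):
    """The suite 'accepts' an implementation if all of its tests pass on it."""
    for test in tests:
        try:
            test(implementation)
        except AssertionError:
            return False
    return True

def test_accuracy(tests, labeled_implementations):
    """labeled_implementations: iterable of (implementation, is_acceptable) pairs.

    Accuracy is the fraction of implementations the suite classifies correctly:
    acceptable ones it accepts plus unacceptable ones it rejects.
    """
    labeled = list(labeled_implementations)
    correct = sum(
        suite_accepts(tests, impl) == is_acceptable
        for impl, is_acceptable in labeled
    )
    return correct / len(labeled)
```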
Improving Students’ Testing Practices
- Education, Computer Science, 2020 IEEE/ACM 42nd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)
- 2020
This research proposes to quantitatively measure the quality of student-written test code, and qualitatively identify the common mistakes and bad smells observed in student-written test code.
Using Spectrum-Based Fault Location and Heatmaps to Express Debugging Suggestions to Student Programmers
- Computer Science, ACE '17
- 2017
A technique is presented for annotating the student's code with suggestions of where to investigate, based on the results of automatic fault localization from the GZoltar statistical fault localization library.
A Test-Driven Approach to Improving Student Contributions to Open-Source Projects
- Education, 2019 IEEE Frontiers in Education Conference (FIE)
- 2019
It was found that students in the TDD group were able to apply test-driven techniques pragmatically, spending more than 20% of their time on average complying with the test-driven process throughout the whole project.
How do Students Test Software Units?
- Education, 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET)
- 2021
This paper offers insight into the ideas and beliefs about testing held by students who finished an introductory programming course without any formal education on testing; the main outcome is that students do not test systematically, while most of them think they do.
Applying spectrum-based fault localization to generate debugging suggestions for student programmers
- Computer Science, 2015 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)
- 2015
This study used the GZoltar statistical fault localization library for Java to analyze 135 CS2-level student programs, manually debugging the programs to find the locations of their faults, and produced a feasible strategy for providing accurate, automated suggestions to students for "where to look" in order to fix their own programs.
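As an illustration of how spectrum-based fault localization ranks code locations, the sketch below computes Tarantula suspiciousness scores from per-test line coverage and pass/fail outcomes. It is a toy stand-in for what a library such as GZoltar reports; the coverage data and names are invented.

```python
# Spectrum-based fault localization sketch using the Tarantula formula:
# lines covered mostly by failing tests receive higher suspiciousness.

def tarantula_suspiciousness(coverage, outcomes):
    """coverage: {test_name: set of covered line numbers}
    outcomes: {test_name: True if the test passed, False if it failed}
    Returns {line: suspiciousness in [0, 1]}, higher = more suspicious."""
    total_passed = sum(1 for ok in outcomes.values() if ok)
    total_failed = sum(1 for ok in outcomes.values() if not ok)
    lines = set().union(*coverage.values())
    scores = {}
    for line in lines:
        passed = sum(1 for t, covered in coverage.items() if line in covered and outcomes[t])
        failed = sum(1 for t, covered in coverage.items() if line in covered and not outcomes[t])
        fail_ratio = failed / total_failed if total_failed else 0.0
        pass_ratio = passed / total_passed if total_passed else 0.0
        denominator = fail_ratio + pass_ratio
        scores[line] = fail_ratio / denominator if denominator else 0.0
    return scores

if __name__ == "__main__":
    coverage = {"t1": {1, 2, 3}, "t2": {1, 3}, "t3": {1, 4}}
    outcomes = {"t1": False, "t2": True, "t3": True}
    # Line 2 is executed only by the failing test, so it ranks most suspicious.
    print(sorted(tarantula_suspiciousness(coverage, outcomes).items(),
                 key=lambda item: -item[1]))
```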
Challenges to integrate software testing into introductory programming courses
- Computer Science, 2017 IEEE Frontiers in Education Conference (FIE)
- 2017
The main contribution of this paper is a catalog of challenges faced when integrating software testing into introductory programming courses, pointing out challenges that have scarcely been addressed in the literature.
Mutation Testing and Self/Peer Assessment: Analyzing their Effect on Students in a Software Testing Course
- Computer Science, 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET)
- 2021
Three years of experience integrating both mutation testing and self/peer assessment into a software testing course revealed that test suites with more test cases did not always achieve the highest scores, that students found their own tests more readable, and that they tended to cover the basic operations while forgetting about more advanced features.
References
Showing 1-10 of 13 references
Toward practical mutation analysis for evaluating the quality of student-written software tests
- Education, ICER
- 2013
This paper describes a new approach to mutation analysis of student-written tests that is more practical for educational use, especially in an automated grading context, and combines several techniques to produce a novel solution that addresses the shortcomings raised by more traditional mutation analysis.
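For readers unfamiliar with mutation analysis, the sketch below shows the general idea in Python rather than Java: derive a mutant of a reference solution with a small syntactic change and score a test suite by the fraction of mutants it kills. It is only an illustration of traditional mutation analysis, not the more practical approach this reference proposes; the mutation operator and all names are invented.

```python
# Toy mutation analysis: mutate a reference solution at the AST level and count
# how many mutants the student's tests "kill" (cause at least one test to fail).
import ast

REFERENCE_SOURCE = """
def add(a, b):
    return a + b
"""

class SwapAddToSub(ast.NodeTransformer):
    """One hypothetical mutation operator: replace '+' with '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def make_mutant(source):
    """Compile a mutated copy of the reference solution and return its add()."""
    tree = ast.parse(source)
    tree = SwapAddToSub().visit(tree)
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return namespace["add"]

def mutation_score(tests, mutants):
    """Fraction of mutants killed: at least one test fails on the mutant."""
    def killed(mutant):
        for test in tests:
            try:
                test(mutant)
            except AssertionError:
                return True
        return False
    return sum(killed(m) for m in mutants) / len(mutants)

def test_add_positive(add):
    assert add(2, 3) == 5

if __name__ == "__main__":
    mutants = [make_mutant(REFERENCE_SOURCE)]
    print(mutation_score([test_add_positive], mutants))  # 1.0: the mutant is killed
```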
Improving student performance by evaluating how well students test their own programs
- Computer Science, JERC
- 2003
This paper presents an approach to teaching software testing that encourages students to practice testing skills in many classes and gives them concrete feedback on their testing performance, without requiring a new course, any new faculty resources, or a significant number of lecture hours in each course where testing is practiced.
Running students' software tests against each others' code: new life for an old "gimmick"
- Education, Computer Science, SIGCSE '12
- 2012
A novel solution for Java is presented that uses bytecode rewriting to transform a student's tests into a form that uses reflection to run against any other solution, regardless of any compile-time dependencies that may have been present in the original tests.
Helping students appreciate test-driven development (TDD)
- Education, OOPSLA '06
- 2006
The initial experiences teaching students to write test cases and evaluating student-written test suites are reported, with an emphasis on the observation that, without proper incentive to write test cases early, many students will complete the programming assignment first and then build their test cases afterwards.
Mutation analysis vs. code coverage in automated assessment of students' testing skills
- Computer Science, SPLASH/OOPSLA Companion
- 2010
Initial results from applying mutation analysis to real course submissions indicate that mutation analysis could be used to address some of the problems of code coverage in assessment.
A gimmick to integrate software testing throughout the curriculum
- Computer Science, SIGCSE '02
- 2002
Experiences in which students of a programming course were asked to submit both an implementation and a test set are discussed, an approach that introduces implicit principles of software testing together with a bit of fun competition.
Experiences using test-driven development with an automated grader
- Computer Science
- 2007
Experiences using software testing in CS1- and CS2-level courses over the past three years are summarized, focusing on student perceptions of automated grading tools and how those perceptions might be addressed, approaches to designing project specifications, and strategies for providing meaningful feedback to students.
Using software testing to move students from trial-and-error to reflection-in-action
- Education, SIGCSE '04
- 2004
Introductory computer science students rely on a trial-and-error approach to fixing errors and debugging for too long. Moving to a reflection-in-action strategy can help students become more…
Grading student programs using ASSYST
- Education, Computer Science, SIGCSE '97
- 1997
ASSYST offers a graphical interface that can be used to direct all aspects of the grading process, and it considers a wide range of criteria in its automatic assessment.
Software testing in the computer science curriculum -- a holistic approach
- Education, ACSE '00
- 2000
A unifying framework is presented that identifies a minimal set of test experiences, skills, and concepts students should accumulate; the holistic approach combines common test experiences in core courses, an elective course in software testing, and volunteer participation in a test laboratory.