Exploring programming assessment instruments: a classification scheme for examination questions

@inproceedings{Sheard2011ExploringPA,
  title={Exploring programming assessment instruments: a classification scheme for examination questions},
  author={Judithe Sheard and Simon and Angela Carbone and Donald D. Chinn and Mikko-Jussi Laakso and Tony Clear and Michael de Raadt and Daryl J. D'Souza and James Harland and Raymond Lister and Anne Philpott and Geoff Warburton},
  booktitle={Proceedings of the seventh international workshop on Computing education research},
  year={2011}
}
  • Judithe Sheard, Simon, Angela Carbone, Donald D. Chinn, Mikko-Jussi Laakso, Tony Clear, Michael de Raadt, Daryl J. D'Souza, James Harland, Raymond Lister, Anne Philpott, Geoff Warburton
  • Published 8 August 2011
  • Education
  • Proceedings of the seventh international workshop on Computing education research
This paper describes the development of a classification scheme that can be used to investigate the characteristics of introductory programming examinations. We describe the process of developing the scheme, explain its categories, and present a taste of the results of a pilot analysis of a set of CS1 exam papers. This study is part of a project that aims to investigate the nature and composition of formal examination instruments used in the summative assessment of introductory programming… 

Figures and Tables from this paper

Introductory programming: examining the exams

It is found that introductory programming examinations vary greatly in the coverage of topics, question styles, skill required to answer questions and the level of difficulty of questions.

A comparative analysis of results on programming exams

The performance of students in two programming subjects is examined, as a means of determining how to measure the difficulty of a particular question.

Can computing academics assess the difficulty of programming examination questions?

The conclusion is that computing academics do have a fairly good idea of the difficulty of programming exam questions, even for a course that they did not teach, and some areas where the relationships show weaknesses.

How difficult are exams?: a framework for assessing the complexity of introductory programming exams

A study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme finds substantial variation across the exams for all measures.

The Compound Nature of Novice Programming Assessments

Examination questions used to assess novice programming at the syntax level are analyzed, and the extent to which each syntax component is used across the various examination questions is described.

Automatic question classification models for computer programming examination: A systematic literature review

This study aims to analyze the ongoing question classification models with reference to the set of formulated research questions and finds the necessity to develop advanced hybrid feature selection methods in order to enhance the classification performance.
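To make the idea of automatic question classification concrete, here is a minimal, purely illustrative sketch: a toy keyword-based classifier that sorts exam questions into broad styles. The category names and keyword cues are assumptions chosen for illustration; the models surveyed in the paper above learn such mappings from data rather than from hand-written rules.

```python
# Toy sketch of exam-question classification by surface keywords.
# Categories and cue phrases are illustrative assumptions, not from the paper.

KEYWORDS = {
    "code_writing": ("write a", "implement", "define a function"),
    "code_tracing": ("what is the output", "trace", "value of"),
    "explain": ("explain", "describe", "in plain english"),
}

def classify_question(text: str) -> str:
    """Return the first category whose cue phrase appears in the question."""
    lowered = text.lower()
    for category, cues in KEYWORDS.items():
        if any(cue in lowered for cue in cues):
            return category
    return "other"

print(classify_question("Write a function that reverses a list."))    # code_writing
print(classify_question("What is the output of the following loop?")) # code_tracing
```

A real system would replace the hand-picked cues with learned features (e.g. bag-of-words weights) and a trained classifier, which is exactly where the feature selection methods discussed above come in.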

Stepping up to integrative questions on CS1 exams

It is argued that concept questions provide more accurate formative feedback and simplify marking by reducing the number of variants that must be considered, and inexperienced students have the most to gain from the use of concept questions.

Paper Or IDE?: The Impact of Exam Format on Student Performance in a CS1 Course

A year-long study investigated student performance across two exam formats, paper and IDE, with the goal of identifying any differences attributable to the format.

Measuring Student Competency in University Introductory Computer Programming: Epistemological and Methodological Foundations

The first phase in the development of an instrument to measure CS1 student competency was concerned with the garnering of content aspect evidence and the qualitative procedures applied to deal with the literature, previous research, and existing instruments.

Assessing the assessment — Insights into CS1 exams

  • E. Zur, T. Vilner
  • Education
  • 2014 IEEE Frontiers in Education Conference (FIE) Proceedings
  • 2014
The objective was to identify the difficulties the students faced and to provide guidelines for writing final exams in CS1 that better reflect the material covered, making them a fairer assessment instrument and decreasing the failure rate.
...

References

SHOWING 1-10 OF 31 REFERENCES

Developing a validated assessment of fundamental CS1 concepts

A method for creating a language independent CS1 assessment instrument is proposed and the results of the analysis used to define the common conceptual content that will serve as the framework for the exam are presented.

Classifying computing education papers: process and results

Analysis of the ICER papers confirms that ICER is a research-intensive conference, and indicates that the research is quite narrowly focused, with the majority of the papers set in the context of programming courses.

Reviewing CS1 exam question content

An evaluation of the content and cognitive requirements of individual questions suggests that in order to succeed, students must internalize a large amount of CS1 content.

Instructor perspectives of multiple-choice questions in summative assessment for novice programmers

The findings highlight that most of the instructors believed that summative assessment is, and is meant to be, a valid measure of a student's ability to program, and that multiple-choice questions provide a means of testing a low level of understanding.

Assessing fundamental introductory computing concept knowledge in a language independent manner

The Foundational CS1 (FCS1) Assessment instrument is developed, the first assessment instrument for introductory computer science concepts that is applicable across a variety of current pedagogies and programming languages and demonstrates that novice computing students, at an appropriate level of development, can transfer their understanding of fundamental concepts to pseudo-code notation.

Cross-institutional Comparison of Mechanics Examinations: A Guide for the Curious

This process is an example of a simple, easy to implement, and readily transportable approach to cross-institutional peer review of assessments, and an effective way of enhancing collaborative links between engineering educators.

Going SOLO to assess novice programmers

This paper explores the programming knowledge of novices using Biggs' SOLO taxonomy. It builds on previous work of Lister et al. (2006) and addresses some of the criticisms of that work…

Reliably classifying novice programmer exam responses using the SOLO taxonomy

The paper derives an augmented set of SOLO categories for application to the programming domain, and proposes a set of guidelines for researchers to use.

Relationships between reading, tracing and writing skills in introductory programming

The performance of students on code tracing tasks correlated with their performance on code writing tasks, and a correlation was found between performance on "explain in plain English" tasks and code writing.

What do teachers teach in introductory programming?

A general, worldwide picture of teachers' opinions about what should be taught in introductory programming courses is created, and it is found that teaching is not only a matter of topics, but also a matter of perspective on teaching the topics.