Automated Measurement of Competencies and Generation of Feedback in Object-Oriented Programming Courses

Bibliographic Details
Published in: 2020 IEEE Global Engineering Education Conference (EDUCON), pp. 329 - 338
Main Authors: Krugel, Johannes, Hubwieser, Peter, Goedicke, Michael, Striewe, Michael, Talbot, Mike, Olbricht, Christoph, Schypula, Melanie, Zettler, Simon
Format: Conference Proceeding
Language: English
Published: IEEE 01-04-2020
Description
Summary: To overcome the shortage of computer specialists, there is an increased need for corresponding study and training offerings, in particular for learning programming. The automated assessment of solutions to programming tasks could relieve teachers of time-consuming corrections and provide individual feedback even in online courses without any personal teacher. The e-assessment system JACK has been applied successfully for more than 12 years, e.g., in a CS1 lecture. However, there are only a few solid research results on competencies and competence models for object-oriented programming (OOP) that could serve as a foundation for high-quality feedback. In a joint research project of research groups at two universities, we aim to empirically define competencies for OOP using a mixed-methods approach. As a first step, we performed a qualitative content analysis of source code (sample solutions and students' solutions) and thereby identified a set of suitable competency components that forms the core of further investigations. Semi-structured interviews with learners will be used to identify learners' difficulties and misconceptions and to refine the set of competency components. Based on that, we will use Item Response Theory (IRT) to develop an automatically evaluable test instrument for the implementation of abstract data types. We will further develop empirically founded, competency-based feedback that can be used in e-assessment systems and MOOCs.
ISSN:2165-9567
DOI:10.1109/EDUCON45650.2020.9125323