Comparing Human and Automated Essay Scoring for Prospective Graduate Students With Learning Disabilities and/or ADHD

Bibliographic Details
Published in: Applied Measurement in Education, Vol. 29, No. 3, pp. 161-172
Main Authors: Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael
Format: Journal Article
Language: English
Published: Philadelphia: Routledge (Taylor & Francis Ltd), 02-07-2016
Description
Summary: Automated essay scoring is a developing technology that can score large numbers of written responses efficiently. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and to inform its emergence in other areas, such as K-12 large-scale assessment. In this study, human and automated scores on essays written by college students with and without learning disabilities and/or attention deficit hyperactivity disorder were compared, using a nationwide (U.S.) sample of prospective graduate students taking the revised Graduate Record Examination. On average, human raters and the automated scoring engine assigned similar essay scores to all groups, despite average differences among groups in essay length and spelling errors.
ISSN: 0895-7347, 1532-4818
DOI: 10.1080/08957347.2016.1171765