Durham Research Online
Addressing bias to improve reliability in peer review of programming coursework.

Bradley, Steven (2019) 'Addressing bias to improve reliability in peer review of programming coursework.', in Koli Calling '19: Proceedings of the 19th Koli Calling International Conference on Computing Education Research. New York: ACM, p. 19.

Abstract

Peer review has many potential pedagogical benefits, particularly in the area of programming, where it is a part of everyday professional practice. Although sometimes used for formative assessment, it is less commonly used for summative assessment, partly because of a perceived difficulty with reliability. We explore the use of a hierarchical Bayesian model to account for varying bias and precision amongst student assessors. We show that the model is sound and produces benefits in assessment reliability in real assessments. Such analyses have been used in essay subjects before but not, to our knowledge, within programming.
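As an illustration of the kind of model the abstract describes, the sketch below fits a small hierarchical Bayesian model in PyMC in which each submission has a latent true grade and each assessor has a partially pooled bias and an individual noise level (the inverse of precision). All variable names, priors, and the toy data are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np
import pymc as pm

# Toy data: each review r is assessor asr[r] grading submission sub[r].
rng = np.random.default_rng(1)
n_subs, n_assessors, n_reviews = 20, 15, 80
sub = rng.integers(0, n_subs, size=n_reviews)
asr = rng.integers(0, n_assessors, size=n_reviews)
true_quality = rng.normal(60, 12, n_subs)
true_bias = rng.normal(0, 5, n_assessors)     # systematic over/under-marking
true_noise = rng.uniform(2, 10, n_assessors)  # per-assessor imprecision
grade = rng.normal(true_quality[sub] + true_bias[asr], true_noise[asr])

with pm.Model():
    # Latent true grade of each submission
    quality = pm.Normal("quality", mu=60, sigma=15, shape=n_subs)
    # Assessor bias, partially pooled through a shared scale
    bias_sd = pm.HalfNormal("bias_sd", sigma=5)
    bias = pm.Normal("bias", mu=0, sigma=bias_sd, shape=n_assessors)
    # Per-assessor noise: a low value means high precision, so that
    # assessor's marks pull the posterior for "their" submissions harder
    noise = pm.HalfNormal("noise", sigma=10, shape=n_assessors)
    # Observed grade = latent quality + assessor bias, with assessor noise
    pm.Normal("obs", mu=quality[sub] + bias[asr],
              sigma=noise[asr], observed=grade)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Bias-corrected grade estimates: posterior means of the latent quality
print(idata.posterior["quality"].mean(dim=("chain", "draw")).values)
```

Under this toy setup, the posterior quality estimates discount assessors who mark systematically high or low and down-weight noisy assessors, which is the reliability gain the abstract claims for real assessments.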

Item Type: Book chapter
Full text: (AM) Accepted Manuscript, PDF (1073Kb)
Status: Peer-reviewed
Publisher Web site: https://doi.org/10.1145/3364510.3364523
Publisher statement: © 2019 Copyright held by the owner/author(s). This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Koli Calling '19: Proceedings of the 19th Koli Calling International Conference on Computing Education Research, https://doi.org/10.1145/3364510.3364523
Date accepted: 11 September 2019
Date deposited: 01 November 2019
Date of first online publication: 21 November 2019
Date first made open access: 28 January 2020
