The aim of the Spring Meeting was to discuss ideas raised in the presentations at the October meeting. We examined some core aspects of the debate around the use of marking criteria, grading scales and rubrics. We began by reading through a list of core points together, then commented on them and shared ideas at each table. These are the points we discussed:
Rubrics increase the transparency and reliability of assessment.
Rubrics foster autonomy and self-regulation in students.
“(In order) to share standards in higher education, there has been an overemphasis on detailing criteria and levels. Using explicit criteria cannot capture all the different aspects of quality”.
“Limits to the extent that standards can be articulated explicitly must be recognised” (HEA, 2012, A Marked Improvement: Transforming Assessment in Higher Education).
Grading complex performance requires professional judgement more than measurement; grading decisions are holistic (Yorke, 2011).
Assessors work backwards: (a) holistic judgement, (b) awarding marks against criteria, (c) justified grade decision (Bloxham, Boyd & Orr, 2011; Brooks, 2012).
Criteria and standards can be better communicated by discussing exemplars (e.g., annotated exemplars can illustrate what a ‘wide and precise’ vocabulary means compared to ‘simple vocabulary’).
Comparative judgement (‘Which essay is better?’) plays to human strengths by asking markers to compare two pieces of work rather than make an absolute judgement.
Comparative judgement reduces teacher workload and can provide high levels of reliability for large sets of essays (e.g., across campuses – UoR and UoRM – or even across universities).
Comparative judgement can also promote teacher development, since it exposes teachers to a wide range of student work as they judge.