Spring Meeting 2018

The aim of the Spring Meeting was to discuss some ideas raised in the presentations at the October meeting. We discussed some core aspects of the debate around the use of marking criteria, grading scales and rubrics. We began by reading through a list of core points together, then commented on them and shared ideas at each table. These are the points we discussed:

  • Rubrics increase the transparency and reliability of assessment
  • Rubrics foster autonomy and self-regulation in students
  • “(In order) to share standards in higher education, there has been an overemphasis on detailing criteria and levels. Using explicit criteria cannot capture all the different aspects of quality”
  • “Limits to the extent that standards can be articulated explicitly must be recognised” (HEA, 2012. A Marked Improvement. Transforming assessment in HE)
  • Grading complex performance requires professional judgements more than measurement. Grading decisions are holistic (Yorke, 2011).
  • Assessors work backwards: a) holistic judgement, b) awarding of marks to criteria, c) justified grade decision (Bloxham, Boyd & Orr, 2011; Brooks, 2012)
  • Criteria and standards can be better communicated by discussing exemplars (e.g., annotated exemplars can illustrate what a ‘wide and precise’ vocabulary means compared to ‘simple vocabulary’)
  • Comparative judgement (‘which essay is better’?) plays to human strengths by requiring markers to compare two things, rather than make an absolute judgement
  • Comparative judgement reduces teacher workload and can provide high levels of reliability for large sets of essays (e.g., across campuses – UoR and UoRM – or even across universities)
  • Comparative judgement can also promote teacher development. When using comparative judgement, teachers judge a wide range of student work
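The pairwise approach described in the points above can be illustrated in code. The sketch below is a minimal, hypothetical example (the essay IDs and judgements are invented, not data from the meeting): it turns a set of "which essay is better?" decisions into a ranking by fitting a simple Bradley-Terry model with an iterative update, using only the Python standard library.

```python
# Minimal sketch of comparative judgement scoring (hypothetical data):
# fit a Bradley-Terry model to pairwise "which essay is better?" decisions
# using the classic iterative (MM) update, with no external libraries.

from collections import defaultdict

# Hypothetical pairwise judgements: (winner, loser) essay IDs.
judgements = [
    ("A", "B"), ("A", "B"), ("B", "A"),
    ("A", "C"), ("C", "A"), ("A", "C"),
    ("B", "C"), ("B", "C"), ("C", "B"),
    ("C", "D"), ("C", "D"), ("D", "C"),
    ("B", "D"), ("A", "D"),
]

essays = sorted({e for pair in judgements for e in pair})
wins = defaultdict(int)    # total wins per essay
pairs = defaultdict(int)   # number of comparisons per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    pairs[frozenset((winner, loser))] += 1

# Bradley-Terry strengths via the MM update:
#   p_i <- W_i / sum_j  n_ij / (p_i + p_j)
p = {e: 1.0 for e in essays}
for _ in range(200):
    new_p = {}
    for i in essays:
        denom = sum(
            pairs[frozenset((i, j))] / (p[i] + p[j])
            for j in essays if j != i
        )
        new_p[i] = wins[i] / denom if denom else p[i]
    total = sum(new_p.values())
    p = {e: v / total for e, v in new_p.items()}  # normalise strengths

ranking = sorted(essays, key=p.get, reverse=True)
print(ranking)  # essays ordered from strongest to weakest
```

Because each marker only ever compares two pieces of work, many markers' judgements can be pooled into one scale, which is what allows high reliability across large essay sets and across campuses.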

Autumn Meeting 2017

In 2017-2018, we widened participation by inviting colleagues with an interest in foreign language pedagogy to join us in termly meetings. The purpose of the first meeting, held on 13th November 2017, was to spark discussion on a topic that is currently highly controversial: the marking criteria, rubrics and grading scales used to assess speaking and writing in a foreign language, the theme we chose for this year. Three short presentations served as a springboard for some initial discussion, which was carried over to the Spring Term Meeting. Below are brief summaries of the presentations and links to the slides:

What can I say? An overview of the new GCSE speaking assessment, Sarah Marston, IoE
An insight into how GCSE pupils are graded against the new 1-9 grade criteria in the speaking component of the GCSE. The presentation explains how pupils' spoken language skills are tested, which skills, grammar and vocabulary they are expected to know, which topics are now included, and how the exam links to the National Curriculum. It also outlines how the current expectations differ from what our current MFL undergraduates experienced and what we should now expect the MFL undergraduates of 2020 to be able to do. Draft examples of exam specifications and mark schemes are discussed.

What use are band descriptors?, Rob Playfair, IFP, ISLI
Focussing on descriptors for writing assessment, I discuss my experience, both in Singapore and on the PSE course at Reading, of grappling with descriptors to a) support student progress and b) produce reliable scores. After outlining the problems I have faced, I discuss my embryonic thoughts on possible supplements to descriptors, based mainly on the work of Daisy Christodoulou: https://thewingtoheaven.wordpress.com/2017/01/03/making-good-progress-the-future-of-assessment-for-learning/.

‘Standards’ and ‘marking criteria’: honest attempts at impossible tasks?, Rita Balestrini, MLES
Despite their limitations, analytic marking systems have become widely accepted in Higher Education. Marking schemes that make use of rubrics are commonly used to increase the transparency of the assessment process and help students to understand what is expected from them. Unfortunately, neither is necessarily the case. Are we perhaps asking too much of marking criteria? How can we enhance the process of assessing writing and speaking skills in foreign languages? In this presentation, I draw on recent literature in the area of assessment and feedback in higher education to discuss these questions.