Involving students in the appraisal of rubrics for performance-based assessment in Foreign Languages

By Rita Balestrini, Department of Modern Languages and European Studies

 

 

Context

In 2016, in the Department of Modern Languages and European Studies (DMLES), it was decided that the marking schemes used to assess writing and speaking skills needed to be revised and standardised in order to ensure transparency and consistency of evaluation across different languages and levels. A number of colleagues teaching language modules held a preliminary meeting to discuss what changes needed to be made, which criteria to include in the new rubrics and whether the new marking schemes would apply to all levels. While addressing these questions, I developed a project with the support of the Teaching and Learning Development Fund. The project, now in its final stage, aims to enhance the process of assessing writing and speaking skills across the languages taught in the department. It intends to make assessment more transparent, understandable and useful for students; to foster their active participation in the process; and to increase their uptake of feedback.

 

The first stage of the project involved:

  • a literature review on the use of standards-based assessment, assessment rubrics and exemplars in higher education;
  • the organisation of three focus groups, one for each year of study;
  • the development of a questionnaire, in collaboration with three students, based on the initial findings from the focus groups;
  • the collection of exemplars of written and oral work to be piloted for one Beginners language module.

I had a few opportunities to disseminate some key ideas that emerged from the literature review – at the School of Literature and Languages’ assessment and feedback away day, at the CQSD showcase and at the autumn meeting of the Language Teaching Community of Practice. Having only touched upon the focus groups at the CQSD showcase, I will describe here how they were organised, run and analysed, and will summarise some of the insights gained.

 

Organising and running the focus groups

Focus groups are a method of qualitative research that has become increasingly popular and is often used to inform policies and improve the provision of services. However, the data generated by a focus group are not generalisable to a population group as a whole (Barbour, 2007; Howitt, 2016).

 

After attending the People Development session on ‘Conducting Focus groups’, I realised that the logistics of their organisation, the transcription of the discussions and the analysis of the data they generate require a considerable amount of time and detailed planning. Nonetheless, I decided to use them to gain insights into students’ perspectives on the assessment process and into their understanding of marking criteria.

 

The recruitment of participants was not a quick task. It involved sending several emails to students studying at least one language in the department and visiting classrooms to advertise the project. In the end, I managed to recruit twenty-two volunteers: eight for Part I, six for Part II and eight for Part III. I obtained their consent to record the discussions and use the data generated by the analysis. As a ‘thank you’ for participating, students received a £10 Amazon voucher.

 

Each focus group lasted one hour; the discussions were recorded in full and were based on the same topic guide and stimulus material. To open the discussion, I used visual stimuli and asked the following question:

  • In your opinion, what is the aim of assessment?

In all three groups, this triggered some initial interaction directed at me. I then started picking up on differences between participants’ perspectives, asking for clarification and drawing on their insights. Slowly, a relaxed and non-threatening atmosphere developed and led to more spontaneous and natural group conversation, which followed different dynamics in each group. I then began to draw on some core questions I had prepared to elicit students’ perspectives. During each session, I took notes on turn-taking and on relevant contextual clues.

 

I ended all three focus group sessions by asking participants to carry out a task in groups of three or four. I gave each group a copy of the marking criteria currently used in the department and an empty grid reproducing the structure of the marking schemes. I asked them the following question:

  • If you were given the chance to generate your own marking criteria, what aspects of writing/speaking/translating would you add or eliminate?

I then invited them to discuss their views and use the empty grid to write down the main ideas shared by the members of their group. The most desired criteria were effort, commitment, and participation.

 

Transcribing and analysing the focus groups’ discussions

Focus groups, as a qualitative method, are not tied to any specific analytical framework, but qualitative researchers warn us not to take the discourse data at face value (Barbour, 2007:21). Bearing this in mind, I transcribed the recorded discussions and chose discourse analysis as an analytical framework to identify the discursive patterns emerging from students’ spoken interactions.

 

The focus of the analysis was more on ‘words’ and ‘ideas’ than on the process of interaction. I read and listened to the discussions many times and, as I identified recurrent themes, I started coding excerpts. I then moved back and forth between the coding frame and the transcripts, adding or removing themes, renaming them and reallocating excerpts to different ‘themes’, as sketched below.
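For readers curious about the mechanics, the following is a minimal Python sketch of that iterative coding process, assuming transcripts have been reduced to plain-text excerpts. The theme labels and excerpt snippets are invented for illustration; they are not data from the study.

from collections import defaultdict

# Coding frame: theme label -> list of (focus group, excerpt) pairs.
coding_frame = defaultdict(list)

def code_excerpt(theme, group, excerpt):
    """Allocate a transcript excerpt to a theme in the coding frame."""
    coding_frame[theme].append((group, excerpt))

def rename_theme(old, new):
    """Rename a theme, merging its excerpts if the new label already exists."""
    coding_frame[new].extend(coding_frame.pop(old, []))

def reallocate(excerpt, source, target):
    """Move a single excerpt from one theme to another."""
    for item in list(coding_frame[source]):
        if item[1] == excerpt:
            coding_frame[source].remove(item)
            coding_frame[target].append(item)
            break

# Illustrative use (invented labels and snippets, not study data):
code_excerpt("complexity", "Part I", "you cannot break language into boxes")
code_excerpt("vagueness", "Part III", "what does good accuracy actually mean?")
rename_theme("vagueness", "ambiguity of descriptors")

Keeping the frame as a simple mapping from labels to excerpts makes the back-and-forth between transcripts and themes cheap: renaming, merging and reallocating are one-line operations rather than re-reads of the whole corpus.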

 

Spoken discourse lends itself to multiple levels of analysis, but since my focus was on students’ perspectives on the assessment process and their understanding of marking criteria, I concentrated on the themes that seemed to offer most insight into these specific aspects. Relating the themes to one another helped me to shed new light on some familiar issues and to reflect on them in a new way.

 

Some insights into students’ perspectives

As language learners, students gain personal experience of the complexity of language and language learning, and the analysis suggests that they draw on the theme of complexity to articulate their unease with the atomistic approach to evaluation embodied in rubrics and, at times, also to contest the descriptors of the standard for a first-class mark. This made me reflect on whether the achievement of almost native-like abilities is actually the standard on which we want to base our evaluation. Larsen-Freeman’s (2015) and Kramsch’s (2008) approaches to language development as a ‘complex system’ helped me to make sense of the ideas of ‘complexity’ and ‘non-linear relations’ in language learning that emerged from the analysis.

 

The second theme I identified is the ambiguity and vagueness of the standards for each criterion. Students draw on this theme not so much to communicate a lack of understanding of the marking scheme as to question the reliability of a process of evaluation that matches performances to numerical values by using opaque descriptors.

 

The third theme that runs through the discussions is the tension between the promise of objectivity of the marking schemes and the fact that their use inevitably implies an element of subjectivity. There is also a tension between the desire for an objective counting of errors and the feeling that ‘errors’ need to be ‘weighted’ in relation to a specific learning context and an individual learning path. On the one hand, there is the unpredictable and infinite variety of complex performances, which cannot easily be broken down into parts in order to be evaluated objectively; on the other hand, there is the expectation that the sum of the parts, when adequately mapped to clear marking schemes, results in an objective mark.

 

Rubrics in general seem to be part of a double discourse. As an instructional tool, they are described as unreliable, discouraging and disheartening. The feedback they provide is seen as lacking the effect on language development that the complex and personalised feedback of teachers can have. Effective and engaging feedback is always associated with the expert knowledge of a teacher, not with rubrics. However, the need for rubrics as a tool of evaluation is not itself questioned.

 

The idea of using exemplars to pin down standards and make the process of evaluation more objective emerged from the Part III focus group discussion. Students weighed the pros and cons of using exemplars, drawing on the same rationales that are debated in scholarly articles. Listening to, and reading systematically through, students’ discourse was quite revealing and brought to light some questionable views on language and language assessment that most marking schemes measuring achievement in foreign languages contribute to promoting.

 

Conclusion

The insights into students’ perspectives gained from the analysis of the focus groups suggest that rubrics can easily create false expectations in students and foster an assessment ‘culture’ based on an idea of learning as a steady increase in skills. We need to ask ourselves how we could design marking schemes that communicate a more realistic view of language development. Could we create marking schemes that students do not find disheartening, and that help them understand how to progress? Rather than just evaluation tools, rubrics should be learning tools that describe different levels of performance and avoid evaluative language.

 

However, the issues of ‘transparency’ and ‘reliability’ cannot be solved by designing clearer, more detailed or student-friendly rubrics. These issues can only be addressed by sharing our expert knowledge of ‘criteria’ and ‘standards’ with students, which can be achieved through dialogue, practice, observation and imitation. Engaging students in marking exercises and involving them in the construction of marking schemes – for example by asking them how they would measure commonly desired criteria like effort and commitment – offers us a way forward.

 

References:

Barbour, R. 2007. Doing Focus Groups. London: Sage.

Howitt, D. 2016. Qualitative Research Methods in Psychology. Harlow: Pearson.

Kramsch, C. 2008. Ecological perspectives on foreign language education. Language Teaching 41 (3): 389-408.

Larsen-Freeman, D. 2015. Saying what we mean: Making a case for ‘language acquisition’ to become ‘language development’. Language Teaching 48 (4): 491-505.

Potter, J. and M. Wetherell. 1987. Discourse and Social Psychology: Beyond Attitudes and Behaviour. London: Sage.

CASE STUDY: ASSESSMENT USING ELECTRONIC LEARNING JOURNALS

Dr Madeleine Davies, Department of English Literature

 

Objectives

  • To use an electronic Learning Journal to improve attendance, engagement and attainment on a Part 3 module I convene, ‘Virginia Woolf and Bloomsbury’
  • To determine whether a ‘Learning Journal + assessed essay’ assessment pattern offers a viable alternative to the ‘assessed essay + exam’ model favoured by the Department of English Literature (this in conversation with the ‘Diversifying Assessments’ TLDF project I co-lead in DEL; see http://blogs.reading.ac.uk/t-and-l-exchange/connecting-with-the-curriculum-framework-using-focus-groups-to-diversify-assessment/)
  • To improve my ‘return of feedback’ scores on my modules; hard copy marking has always been returned to my students within 10 days, yet students select ‘3’ or ‘4’ for the ‘speed of feedback’ question in their module responses. I wanted to see whether online return of marked work within the same period ‘felt’ more like ‘5’ to my students than hard copy return did.

 

Context

The pedagogic aims of my Part 3 module, ‘Virginia Woolf and Bloomsbury’, can be summarised as follows:

  • To gradually construct, over 11 weeks, a detailed and advanced knowledge of Virginia Woolf’s often complex texts and ideas.
  • To develop students’ understanding of the socio-cultural, political and literary contexts of the inter-war period.
  • To enhance skills of close reading and critical knowledge.

 

These aims are challenging because Woolf’s ideas connect with theoretical models including feminism, structuralism and postmodernism. In addition, the important contexts of literary modernism and of post-impressionist art have to be taught in accessible ways so that they can be understood at an advanced level. There is a great deal to learn and only thirty teaching hours available in which to develop the required level of knowledge.

 

Before I introduced technology-enhanced assessment to the module, the assessment pattern involved the following stages:

 

  • one 1500-word formative essay in Week 5 – the instruction was, ‘answer on one text’. Rushed, late, or missing essays characterised this stage.
  • one 2500-word assessed essay in Week 11 – the instruction was, ‘demonstrate substantial knowledge of at least two texts’, one of which could be the formative assignment text.
  • a summer term exam – the instruction was, ‘answer on two texts, avoiding the texts used for the assessed essay’.

 

Not only did this model create significant question-setting work, administrative time, frustration, and paper, but it also inadvertently facilitated inconsistent attendance. Students disappeared from classes in Weeks 9 and 10 as assessed essay deadlines approached, or as they calculated that they only ‘needed’ four texts for assessment; once those had been selected and safely stored under their belts, students disappeared. Tougher material was avoided altogether because the assessment pattern meant that it did not have to be engaged with. The old system also caused essay-writing panic towards the end of the module, then exam-related stress, and both triggered the inevitable chain of ECF requests.

 

None of this was conducive to consistent, productive learning or to strong attainment. In addition, the old assessment system rewarded the best writers, who were able to gloss ‘shallow’ knowledge effectively: these tended to be students from more traditional educational backgrounds, so the assessment model was not heeding inclusivity guidelines because it only ‘recognised’ and rewarded one type of attainment and engagement.

 

Implementation

 

Increasingly dissatisfied with the assessment model, but remaining committed to the teaching and learning aims of the module, I switched to a Blackboard Learning Journal because the pedagogic principles could, I felt, be best achieved (perhaps could only be achieved) using technology.

 

The instruction given to students about the function of the Learning Journal is as follows:

‘The use of a Learning Journal as part of the assessment on this module is designed to encourage and reward consistent attention throughout the course, development in your understanding, and thoughtful reflection on your own learning. It should support you to identify and seek solutions to any problems you encounter in your studies. It also requires you to organise your time carefully in order to make regular submissions, which is a vital skill in the world of work.’

 

This instruction emphasises ‘understanding’ (‘Mastery of the Discipline’ in the Curriculum Framework), self-motivated problem solving, and time-management (‘Graduate Skills’ in the Curriculum Framework). From a pedagogic point of view, ‘thoughtful reflection’ is implicitly framed within the structure of continuous engagement, and this itself is understood within the language of ‘encouragement’ and ‘reward’.

 

The online Learning Journal requires students to submit 500 words every week, reflecting on the week’s teaching and textual material; after five weeks, two entries from the online Journal are assessed and feedback is given (this is the formative stage – no essay questions are necessary). The 10-week Journal concludes with a retrospective entry in Week 11, and there is an assessed essay due for submission four weeks later. There is no longer a summer term exam. The Journal is marked online, and the mark is generated by consistent completion of every entry and by the quality of entry 10 plus four other entries selected by each student.
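As a concrete illustration, here is a minimal Python sketch of how a journal mark along these lines might be computed. The weighting between completion and quality is my own assumption for illustration: the module description specifies only the ingredients (consistent completion, plus the quality of entry 10 and four student-selected entries), not a formula.

from statistics import mean

TOTAL_ENTRIES = 10        # one 500-word entry per teaching week
COMPLETION_WEIGHT = 0.3   # assumed share of the mark for consistency
QUALITY_WEIGHT = 0.7      # assumed share for the five assessed entries

def journal_mark(entries_submitted, entry_marks, selected_weeks):
    """Combine completion and quality into a single percentage mark.

    entries_submitted: how many of the 10 weekly entries were posted
    entry_marks: marks out of 100 for assessed entries, keyed by week
    selected_weeks: the four entries the student chose, besides entry 10
    """
    completion = entries_submitted / TOTAL_ENTRIES
    quality = mean(entry_marks[w] for w in selected_weeks + [10]) / 100
    return 100 * (COMPLETION_WEIGHT * completion + QUALITY_WEIGHT * quality)

# Example: all 10 entries submitted; entry 10 plus weeks 2, 4, 6 and 8 assessed.
marks = {2: 65, 4: 68, 6: 72, 8: 70, 10: 74}
print(round(journal_mark(10, marks, [2, 4, 6, 8]), 1))  # 78.9

Making the completion term explicit, as here, is one way of keeping the ‘reward for consistency’ visible to students rather than folding it silently into the quality marks.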

 

Students know that if they miss lectures and seminars they will struggle to complete the Journal, so attendance is greatly enhanced: an average module attendance rate of 72% (2016-17) leapt to 86% (2017-18) once Journal assessment was implemented. The high level of attendance allowed me to deliver the teaching that I know works most effectively on this module, because I can rely on various connections between ideas being understood. Further, because of attendance, students are in a far stronger position when they prepare to write their assessed essays, so their anxiety is much reduced and they are able to submit their best work. It was notable that no ECFs were requested for extensions on this module in 2017-18 (18 students were enrolled), whereas 3 were submitted the previous year as the Week 11 assessed essay deadline loomed into view.

There is no exam, so my marking is reduced and a redundant element of assessment is removed.

 

Impact

 

The Learning Journal initially produced some anxiety amongst students because DEL does not use Learning Journals at Part 2, so this was the first time these students had managed one. At least 5 minutes at the beginning of several seminars had to be reserved for providing students with repeated information and for reassuring them that, even though the different format and requirements of the Learning Journal felt unfamiliar and even ‘wrong’, they were following the remit correctly.

 

The Learning Journal information was placed on Bb, but most of the 18-strong group did not read the materials on this site; this revealed our students’ resistance to consulting Bb. DEL students seem only to recognise information when it is presented in hard copy, so I had to declare surrender and circulate the Learning Journal Guidelines to students in this form.

 

The majority of students managed to submit weekly work without difficulty and on time. Some students were worried that the Learning Journal format did not seem to prepare them adequately for the more formal writing of the assessed essay. However, by Week 8, the majority of the students expressed their growing engagement with their Journals and, through them, with the module. I also found it interesting that students were more able than usual to forge connections between texts and ideas, and I wondered whether this was because the weekly Journal entries cemented the reading and seminar discussions more securely.

 

As for the feedback sheets, this module was not scheduled for formal evaluation in 2017-18. To gather informal feedback, I asked some of the students in the group to write down (anonymously) how they rated speed of feedback: ‘5’ was registered by every student who responded. I have no idea why precisely the same turnaround period would be viewed as ‘3’ or ‘4’ when hard copy was used and as ‘5’ when electronic feedback was used, but the implications for student satisfaction scores are clear.

 

Connecting with the SLL ‘Diversifying Assessments’ project, it is clear that Learning Journals are an increasingly popular method of assessment in DEL. The results of a Survey Monkey poll conducted in DEL in June 2017 (see http://blogs.reading.ac.uk/t-and-l-exchange/connecting-with-the-curriculum-framework-using-focus-groups-to-diversify-assessment-part-2/) suggest that Student Focus Groups had correctly identified that this form of assessment was capable of challenging the traditional essay in terms of student choice.

 

Reflections

  • In the example of EN3VW (‘Virginia Woolf and Bloomsbury’), technology has allowed me to employ a pedagogic model that was always perfectly suited to the module but that did not always enable success, because students’ engagement was a desired outcome rather than a clear requirement. With the Learning Journal, the pedagogy underpinning the module works effectively for the first time.

 

  • It is clear, however, that students require a great deal of guidance when they first use a Learning Journal, and colleagues need to be aware that increasing students’ freedom to write in less structured forms also increases their anxiety. Time has to be reserved for writing advice, and this can dent seminar time. The time investment is worth it, however, because the work presented in the Journals was of a very high standard.

Jess Phillips MP at the University of Reading (16th November 2017)

Dr Madeleine Davies (Department of English Literature)

The Vice-Chancellor’s Endowment Fund generously supported the Department of English Literature and the Department of Politics and International Relations in hosting Jess Phillips MP at the University of Reading last week.

Jess Phillips was invited to deliver a talk on the topic, ‘Finding your Voice’, and to engage in a Question and Answer session led by Dr Mark Shanahan from the Department of Politics and International Relations. A book-signing for the MP’s recent book, Everywoman, was organised with the help of Blackwell, and this took place after the talk.

185 people were in the audience on the night. Members of the wider community joined us (including some in the 15-18-year-old category), and the majority of the seats were taken by colleagues and students in roughly equal proportion. The University’s live Facebook stream shows that 3,465 views were recorded during the 90-minute broadcast.

A Twitter feed from the event provided a lively flow of the MP’s comments as well as audience responses. One tweet alone was viewed by 1,334 people.

Jess Phillips herself added her ‘like’ to the feed.

Jess Phillips’ talk included her childhood experiences as a campaigner with parents who were both committed to socialist causes: she remembered attending a day-care centre run by activists and helping to produce the banners that would be used on the driveway to Greenham Common. She also discussed a brief period of political apathy when, in the early years of the Blair governments, many situations improved and the need for constant campaigning declined (she noted that she was more a fan of Blair’s ‘early work’ than of his later concepts). The election of David Cameron reignited her political activism, and her years of experience with ‘Women’s Aid’, a refuge charity, finally persuaded her to make herself heard and to enter Parliament. Her speech also addressed issues of class and privilege and questions of fairness and responsibility, and all her commentary was laced with wit, humanity, and a deep-seated commitment to social justice. In the speech and in the Q&A session that followed it, it was clear that Jess’s passion is for equality, not in the highly theorised sense of ‘academic feminism’, but in the ‘lived’ sense of fairness, human rights and plain decency.

All of us who met Jess were extremely impressed by her warmth and her wit: there was no gap between her public image and the real person. It was also a timely and much-needed reminder that there are many MPs who are politicians because they are driven by their convictions and who are defined by their integrity and compassion. Meeting heroes is a dangerous enterprise but not in this case.

Thank you to all colleagues and students who attended the event. Jess Phillips told me (and told many students too) how impressed she was with Reading students and I felt very proud of everyone who contributed so much to such an excellent evening.