Flipping assessment?! by Dr Karen Ayres

Like many colleagues, I have attended a number of interesting talks on the ‘flipped classroom’ approach, whereby, in a role reversal, the main delivery of information takes place outside the classroom and contact time is used instead for reinforcing learning. I haven’t yet identified how I can make use of this approach in my own teaching, but I have been inspired to try ‘flipping’ an assessment in one of my modules. Admittedly this may be the wrong terminology, but what I mean by it is a role reversal when it comes to assessment. In one of my modules this year, instead of asking students to produce a guide on using a statistics computing package, which I would usually then assess for clarity, accuracy and effectiveness as a training resource, I provided students with a piece of work I had created (with deliberate errors and other problems!) and asked them to assess it as if they were the lecturer.

The approach of engaging students in marking is of course not new, since peer marking is used by many lecturers. However, this was not a standard peer marking exercise, because I did not provide them with a marking scheme, nor a set of solutions to use. I left it to the students to decide how they wanted to split up the 100 marks, and what they wanted to award marks for. By doing it this way, my aim was to see whether they knew what the key elements of an effective training guide were, by having them show how they thought one should be marked. They were also asked to provide effective feedback on the work, on the understanding that feedback should be constructive, should benefit learning, and should justify the mark they awarded (I didn’t use the term ‘feed-forward’, but did ask them to consider what they would find useful if the work being commented on was their own). My aim here was to determine whether they understood how the key elements of an effective training guide should be put into practice, and also to see if they were able to identify technical inaccuracies in the work. It is this last point for which I feel the flipped assessment approach may be particularly beneficial. Often students may misunderstand something but not include it in their own piece of work, meaning that the misunderstanding escapes identification. By asking them to mark work which includes errors, and by requiring that they explain why each error is an error, I feel that I’m demanding a deeper level of subject knowledge from them than I would in a traditional assignment. Of course, it’s then important that I go through these errors with them afterwards, to make sure that no misunderstandings have been created!

I’m pleased to report that I was very impressed with what my students did on this assignment (obviously I had to assess their assessment!). It was a group assignment, and all groups produced a very detailed marking scheme in a grid layout – I hadn’t given them any pointers on this, so the fact that they chose to do it this way was encouraging. The written feedback that they provided on the script they were given was similarly impressive, and in some cases of the same standard as that which my colleagues and I routinely provide. More interesting still, alongside their various annotations on the script, they provided a separate, very detailed document listing errors and issues with the work, including further feed-forward comments. If students expect this multi-layered level of detailed feedback on their own work as standard, this might explain why some are unhappy with the (still reasonably detailed) feedback they do receive!

In summary, my aim in designing an assessment in a ‘flipped’ way was to encourage a deeper level of thought, and to assess a deeper level of understanding, than I felt was achieved by the usual approach. I feel that those who are tasked with assessing the knowledge and learning of others need a deeper than usual understanding of both the technical and communication sides of the discipline (certainly in mathematics and statistics). After the success of this trial run I will definitely be looking at how else I can use this type of assessment in my other modules. My next step is to consider how to use something like this for a quantitative assignment, for example by asking students both to produce their own set of solutions with a marking scheme, and then to use these to mark a piece of work that I submit to them for assessment!

This entry was posted in Academic Skills, Assessment & Feedback, Student Engagement.