The effect of remote learning on a peer review assessment

As the convenor of the Neuroscience Fundamentals (NEUR2201) course, I continuously reflect on and analyse the effect of assessments on student learning and on my own teaching practice. NEUR2201 is a stage 2 introductory course in neuroscience with an enrolment of around 80–90 students each year. The course is divided into five modules: one introductory week followed by four fortnightly modules on current neuroscience topics.

At the end of each of the five modules in NEUR2201, key concepts and learning outcomes are assessed using peer review of short answer questions (SAQs). Peer assessment has been shown to enable deeper understanding of content and can increase academic performance (Double et al., 2020; Reinholz, 2016; Topping, 1998). The assessment involves each student answering one SAQ and then reviewing two of their peers’ SAQs using a marking rubric. During the peer review stage of the assessment, we encourage discussion among students and with tutors. Immediately following the assessment, grades and feedback are released, and students can “flag” a mark they believe is unjust.

In 2019, the Moodle Workshop tool was used for peer review. With all courses switching to remote learning in 2020, I saw an opportunity to investigate the effects of remote learning on peer review assessment and on the use of online learning resources, using Moodle Learning Analytics to analyse students’ online behaviour and to test whether that behaviour correlated with student grades. I expected that the successful use of the online Moodle Workshop tool for peer review in NEUR2201 in 2019 would allow a smooth transition of this assessment to the course’s fully remote mode in 2020. However, I discovered some unforeseen challenges: the Moodle Workshop tool did not automatically save SAQ answers; students paid less heed to the time limits without a timer or invigilation; and Blackboard Collaborate live-chat sessions were dominated by technical rather than pedagogical issues. When I analysed these issues, I found that 50% of students’ SAQ answers in the first assessment were either not submitted or became “lost” in the system, whereas by the final assessment 89% were submitted successfully.

Although I was concerned that the 2020 cohort may have been disadvantaged by these technical issues, my analysis showed no significant difference in student performance on the peer review assessment between the 2020 and 2019 cohorts (mean ± SEM: 16.3 ± 0.3 in 2020 vs. 16.9 ± 0.2 in 2019; p = 0.06, unpaired t-test), and a significant improvement in the final exam SAQs (74.5 ± 1.3% in 2020 vs. 67.3 ± 2.1% in 2019 and 61.1 ± 1.7% in 2016; p = 0.04 and p < 0.0001 respectively, one-way ANOVA). I also examined the possibility of increased marking leniency in 2020 but again found no difference between the cohorts. By invitation, I presented this research at the Faculty-wide 2020 Medicine & Health Education Forum (Cederholm, 2020a) and the university-wide 2020 UNSW Learning & Teaching Forum (Cederholm, 2020b). I also disseminated my findings nationally: my peer-reviewed abstract was accepted for presentation at the Australian Physiological Society (AuPS) Education Forum (Cederholm, 2020c).
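
As an illustration of the cohort comparisons above, the sketch below shows how an unpaired t-test and a one-way ANOVA of this kind can be run in Python with SciPy. The grade arrays are synthetic placeholders generated from the reported summary statistics so that the example runs; they are not the actual de-identified records, and the variable names are illustrative.

    # Minimal sketch of the cohort comparisons, assuming grades would be
    # loaded from a de-identified gradebook export. The arrays below are
    # synthetic placeholders seeded from the reported means, NOT real data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    peer_2020 = rng.normal(16.3, 2.5, 85)   # peer review totals (out of 20)
    peer_2019 = rng.normal(16.9, 2.0, 88)

    # Unpaired (independent-samples) t-test between the two cohorts
    t, p = stats.ttest_ind(peer_2020, peer_2019)
    print(f"Peer review: t = {t:.2f}, p = {p:.3f}")

    exam_2020 = rng.normal(74.5, 12.0, 85)  # final exam SAQ scores (%)
    exam_2019 = rng.normal(67.3, 14.0, 88)
    exam_2016 = rng.normal(61.1, 13.0, 90)

    # One-way ANOVA across the three cohorts' final exam SAQ scores
    F, p = stats.f_oneway(exam_2020, exam_2019, exam_2016)
    print(f"Final exam SAQs: F = {F:.2f}, p = {p:.4f}")

In practice, a significant ANOVA would typically be followed by a post-hoc test such as Tukey’s HSD to obtain the pairwise cohort comparisons.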

In addition, a potential issue with peer assessment is that grades given by students can be significantly higher than those given by academic staff. To test this, I evaluated the agreement between the marks given by student markers and the grades awarded by the convenors for the same assessments over a five-year period. I found very little difference between student and convenor grades, with most agreeing within 1 mark out of 10; in 2022, for example, 76% of student marks were within 1 mark of the convenors’ grade. I presented these findings at the 2022 AuPS Education Forum (Cederholm, 2022). This was a reassuring outcome, showing that peer assessment does not advantage or disadvantage a high proportion of students. Convenors can therefore randomly mark 20% of the assessments and be confident in a fair process. However, I found some evidence that students are still developing their skill in judging whether their marks are fair, as a small proportion of unjust grades were not “flagged”. Student feedback also pointed out that ‘they would like more practice in their peer review skills’. I intend to make changes in 2023 to address this; for example, I will set aside class time in week 1 to introduce students to the idea of peer review, brief them on how to carry out a review, and then have them complete a practice review as a group.
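
As a sketch of the agreement analysis, the snippet below computes the proportion of student marks within 1 mark of the convenor’s mark, along with the mean student-convenor difference (a positive value would indicate the leniency effect noted above). The paired marks are hypothetical placeholders, not the actual five-year dataset.

    # Sketch of the student-vs-convenor agreement check. The paired marks
    # below are hypothetical placeholders, NOT the actual assessment data.
    import numpy as np

    rng = np.random.default_rng(1)
    convenor = rng.integers(5, 11, size=80)                         # marks out of 10
    student = np.clip(convenor + rng.integers(-2, 3, size=80), 0, 10)

    diff = student - convenor
    within_one = 100 * np.mean(np.abs(diff) <= 1)  # % agreeing within 1 mark
    bias = diff.mean()                             # > 0 means students mark higher
    print(f"{within_one:.0f}% of student marks within 1 mark of the convenor")
    print(f"Mean student - convenor difference: {bias:+.2f} marks")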


Double KS, McGrane JA, Hopfenbeck TN (2020). Educational Psychology Review 32:481-509.

Reinholz D (2016). Assessment & Evaluation in Higher Education 41(2):301-315.

Topping K (1998). Review of Educational Research 68(3):249-276.

Cederholm JME, Goulton CS, Vickery RM, Moorhouse AJ (2020a). UNSW Faculty of Medicine & Health Education Forum, 4 December (oral).

Cederholm JME, Goulton CS, Vickery RM, Moorhouse AJ (2020b). UNSW Learning and Teaching Forum, “Learning without limits: Leading the change”, 19–20 November (poster).

Cederholm JME, Goulton CS, Vickery RM, Moorhouse AJ (2020c). Proceedings of the Australian Physiological Society, Physiology Education Forum, 25 November (oral).

Cederholm JME, Goulton CS, Vickery RM, Moorhouse AJ (2022). Proceedings of the Australian Physiological Society, Physiology Education Forum, 20–23 November (oral).