Peer Grading in Higher Education – Hadi Hosseini


Most educators regard peer assessment and peer grading as powerful pedagogical tools that engage students in evaluating and grading their peers while saving instructors’ time. The process helps improve students’ understanding of the subject matter and provides an opportunity for deeper reflection on it, reaching the higher levels of Bloom’s taxonomy of thinking.

Designing and distributing tasks and assignments for peer assessment should be as easy as assigning a few papers to each student and waiting for the magic to happen, right? … Not really!

As instructors, we care about the fairness of our evaluation methods and about providing effective feedback. Yet throwing this crucial responsibility onto the shoulders of novice students who (hopefully) have just learned the topic seems awfully risky. There are two major concerns when it comes to peer grading: inflated (or deflated) grades and poor-quality feedback. Both issues seem to originate from the same sources: insincere graders and the limited effort each student invests in grading their peers [2]. To address these issues, researchers have recently asked whether we can design a peer-grading mechanism that incentivizes sincere grading and discourages any kind of secret collusion among students.

The simplest possible design is to evaluate the quality of the peer reviews or, simply put, to “review the reviews” [1, 3, 4, 5]. This procedure is bulletproof, since no student can get away with providing poor-quality feedback or deliberately assigning insincere grades. However, even though this technique may help us involve students in the higher levels of learning, in most situations it is either costly in terms of TA and instructor time or simply impossible in large classes. In fact, this fully supervised approach defeats one of the main purposes of peer grading by doubling or tripling the required grading effort: each marked assignment has to be reviewed by one or two TAs for quality.

As a partially automated solution, the system may randomly send a subset of graded papers to the teaching assistants (TAs) for a sanity check, instead of doing this for every single paper. In contrast, fully automated systems either provide a meta-review procedure in which students evaluate the reviews by rating the feedback they have received [1, 3, 5], or compute a consensus grade for assignments that are initially graded by at least two or three peer graders [5, 6].
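To make these two automated flavours concrete, here is a minimal sketch in Python. All names, the audit rate, and the use of the median as the consensus rule are illustrative assumptions on my part; none of the cited systems necessarily work exactly this way.

```python
import random
import statistics

def consensus_grade(peer_grades):
    """Combine grades from two or three peer graders into one grade.

    The median is one simple consensus rule: a single inflated or
    deflated outlier cannot drag the final grade very far.
    (Hypothetical choice; the cited systems may aggregate differently.)
    """
    return statistics.median(peer_grades)

def sample_for_audit(graded_papers, audit_rate=0.1, seed=None):
    """Randomly pick a subset of peer-graded papers for a TA sanity check."""
    rng = random.Random(seed)
    k = max(1, round(audit_rate * len(graded_papers)))
    return rng.sample(graded_papers, k)

# Example: each paper was graded by three peers.
papers = {"paper_1": [78, 80, 95], "paper_2": [60, 62, 58]}
final = {p: consensus_grade(gs) for p, gs in papers.items()}
audited = sample_for_audit(list(papers), audit_rate=0.5, seed=42)
print(final)    # {'paper_1': 80, 'paper_2': 60}
print(audited)  # e.g. ['paper_2']
```

The point of the sketch is the division of labour: the consensus rule removes the need for a TA to touch every paper, while the random audit keeps a credible threat of detection for insincere graders.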

In a different approach, students are treated as potential graders throughout the term, and only those who pass certain criteria take on the role of independent graders [6]. The premise is that once individuals reach a sufficient level of understanding, they can act as pseudo-experts and participate in the assessment procedure. Of course, to ensure fair grading, the system randomly chooses a subset of graded papers to be reviewed by the instructor.
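As a rough illustration of the promotion idea, a student might become an independent grader once their marks agree closely enough with an expert’s on a few calibration papers. This is not the actual algorithm from [6]; the agreement measure and thresholds below are hypothetical.

```python
def ready_to_grade(student_grades, expert_grades,
                   max_avg_gap=5.0, min_calibrations=3):
    """Decide whether a student may act as an independent grader.

    A student is promoted once they have marked enough calibration
    papers and their average gap from the expert grade is small.
    Thresholds are illustrative, not taken from [6].
    """
    if len(student_grades) < min_calibrations:
        return False
    gaps = [abs(s - e) for s, e in zip(student_grades, expert_grades)]
    avg_gap = sum(gaps) / len(gaps)
    return avg_gap <= max_avg_gap

# A student who tracked the expert within a few points is promoted.
print(ready_to_grade([72, 85, 90], [70, 88, 92]))  # True (avg gap ~2.3)
print(ready_to_grade([50, 95], [70, 70]))          # False (too few papers)
```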

Peer assessment is still in its infancy; nevertheless, a number of researchers in various disciplines are developing new techniques to address the critical issues of efficiency, fairness, and incentives. Each of the above methods (and many others in the peer-grading literature) could potentially be adopted depending on course characteristics and intended outcomes. I do believe that such characteristics, at the very least, must include the following:

  • Skill/knowledge transferability: Do marking skills and knowledge of a previous topic automatically transfer to the next topic? If so, are they sufficient?
    For example, an essay-based course may use similar marking guidelines in all its assignments, so training students once could be enough to turn them into effective peer graders.
  • Course material and structure: How do the topics covered in the course depend on one another? Does the course introduce various semi-independent topics, or do the topics all contribute to building a single overarching subject?

What do you think? Have you ever used peer-assessment in your classes?

 

References

  1. Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education, 48(3), 409-426.
  2. Carbonara, A., Datta, A., Sinha, A., & Zick, Y. (2015). Incentivizing peer grading in MOOCs: An audit game approach. In Proceedings of IJCAI.
  3. Gehringer, E. F. (2001). Electronic peer review and peer grading in computer-science courses. ACM SIGCSE Bulletin, 33(1), 139-143.
  4. Paré, D. E., & Joordens, S. (2008). Peering into large lectures: Examining peer and expert mark agreement using peerScholar, an online peer assessment tool. Journal of Computer Assisted Learning, 24(6), 526-540.
  5. Robinson, R. (2001). Calibrated Peer Review™: An application to increase student reading & writing skills. The American Biology Teacher, 63(7), 474-480.
  6. Wright, J. R., Thornton, C., & Leyton-Brown, K. (2015). Mechanical TA: Partially automated high-stakes peer grading. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education (pp. 96-101). ACM.

Published by

Hadi Hosseini

Hadi Hosseini is a Graduate Instructional Developer at the CTE and a PhD Candidate in the Cheriton School of Computer Science. Hadi's research interest lies at the intersection of artificial intelligence, decision theory, and microeconomics.
