Carnegie Mellon University

Eberly Center

Teaching Excellence & Educational Innovation

Frequent Low-Stakes Assignments

Remote and hybrid modalities may open up additional opportunities for instructors to incorporate recurring, low-stakes assignments and assessments in place of some or all high-stakes assessments – i.e., more homework and quizzes, fewer exams. This assessment strategy gives students more frequent opportunities for practice and feedback over the course of the semester while offering instructors meaningful data points about students’ learning and overall progress. Note that “low stakes” means the assessment is worth fewer points or may be scored based on completion rather than accuracy/quality; this still provides practice opportunities while reducing the grading burden. As with traditional modes of instruction, designing assignments for hybrid and remote environments begins with alignment to the learning objectives of the course.

Learning science research provides ample evidence that students learn by doing (Ericsson et al., 1993; Koedinger et al., 2015), that performance and retention improve with repeated practice (Healy et al., 1993; Martin et al., 2007), and that feedback enhances learning efficiency (Balzer et al., 1989; McKendree, 1990). By providing students with frequent low-stakes assignments – rather than, for example, only one or two high-stakes exams – you give them the practice and feedback they need to learn. As an assessment strategy, this also has students demonstrate their learning across a larger sample of tasks and contexts, which leads to a more accurate assessment of their proficiency. Another advantage of low-stakes assignments and assessments is that, on average, they show lower rates of cheating and plagiarism than high-stakes assessments. Finally, it is worth noting that, in our Spring 2020 survey of CMU instructors and students, low-stakes assignments were rated among the most helpful strategies for student learning.

Instead of giving a single high-stakes exam during the finals period, you can pull subsets of the questions intended for the final exam into shorter assessments administered every few weeks throughout the semester. Note: questions on these shorter assessments can still involve synthesis or integration of the material; they need not revert to simple recall questions.

  • These shorter assessments may be assigned/completed during or outside of class time.
  • While they may be scored at the individual student level, group-based feedback may be provided, e.g., in the form of a solution set or via in-class discussion of common errors and other patterns in students’ performance.

A common question is whether frequent low-stakes assignments overload students by adding more tasks per week, especially if all of their courses take this approach. As long as course instructors stick to the unit load, the workload remains fair. In other words, for semester-long courses, aim to ensure that the time students spend on your course, on average per week, is approximately equal to the number of units; for example, a 9-unit course should average about 9 hours of work per week. Note: all assignment and assessment types as well as class time should be factored into this calculation.

  • To help students structure their time, it is important to frame what a typical week will look like in terms of the tasks they will need to complete. 
  • Instructors can also note that there will not be a big push at midterms/finals (e.g., if these exams are scaled down or eliminated); instead, the workload will be a more consistent weekly amount, AND students’ grades will not depend so heavily on high-stakes assessments.
  • Consider options that allow for flexibility in the event of technology failures, such as allowing students to drop the lowest score.

Many course instructors assign interim assignments leading up to a final paper or project. This approach sequences assessment tasks over a number of weeks or months, which reduces procrastination and discourages completing a substantial project or report immediately before the deadline.

  • Depending on the learning goals of the assignment, students can receive points for effort (e.g., submitting milestones for working on the deliverables) or for quality of responses. 
  • Just as you remind students to spread their work consistently throughout the semester rather than making a big push at midterms and finals, keep the same principle in mind as you plan your own grading.
  • To help mitigate the grading burden, consider where you can provide group-level formative feedback, either via a Canvas announcement or verbally during a synchronous session.
  • Regardless of the grading approach, formative feedback can be given during synchronous sessions to reduce the grading load later in the semester.

Given that courses may run in a variety of modalities – in-person, hybrid, and remote – it is important to communicate your expectations for meaningful participation, particularly via Zoom.

  • Consider providing a rubric for participation so that students are aware of your expectations. An example of a rubric from a History class in Dietrich College is available here.
  • Use technology such as the Zoom chat feature to provide opportunities for real-time communication and access for students who might be hesitant to participate in a remote environment.
  • For hybrid courses, ensure that students have equal access to participation opportunities regardless of their physical location. 
  • Consider providing opportunities for asynchronous participation including discussion boards in Canvas, Q&A in Piazza, or written reflections submitted as a Canvas assignment.

Balzer, W.K., Doherty, M.E., & O’Connor, R. (1989). Effects of cognitive feedback on performance. Psychological Bulletin, 106(3), 410-433.

Ericsson, K.A., Krampe, R.T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406.

Healy, A.F., Clawson, D.M., & McNamara, D.S. (1993). The long-term retention of knowledge and skills. In D.L. Medin (Ed.), The Psychology of Learning and Motivation (pp. 135-164). San Diego, CA: Academic Press.

Koedinger, K.R., Kim, J., Jia, J., McLaughlin, E.A., & Bier, N.L. (2015). Learning is not a spectator sport: Doing is better than watching for learning from a MOOC. In Proceedings of the Second (2015) ACM Conference on Learning at Scale, 111-120.

Martin, F., Klein, J.D., & Sullivan, H. (2007). The impact of instructional elements in computer-based instruction. British Journal of Educational Technology, 38, 623-636.

McKendree, J. (1990). Effective feedback content for tutoring complex skills. Human-Computer Interaction, 5(4), 381-413.