Engaging a Large Cohort with Low-Stakes Quizzes
| Lecturer: | Pia O'Farrell (pia.ofarrell@dcu.ie) |
| Discipline: | Assessment |
| Level: | First year undergraduate |
| Class Size: | 400+ |
| Mode of Delivery: | Face-to-face, plenary sessions |
Introduction
The challenge of engaging a large class of over 400 first-year students and ensuring they are keeping pace with new concepts is a common one. This case study details how a lecturer in Classroom Assessment addressed this by integrating low-stakes, in-class Loop quizzes as a key part of her module's continuous assessment. The strategy not only fostered consistent student engagement but also provided a structured, scaffolded approach to learning while reinforcing academic integrity. The design of the quizzes and the in-class process created a supportive environment that accommodated diverse student needs and ensured students took ownership of their learning.
What was the teaching problem/issue/challenge you faced?
My main challenge was designing an assessment process for a very large first-year module (over 400 students) that would promote continuous engagement with the material from the beginning of the semester. I was concerned that students would fall behind and then resort to "cramming" at the end of the term.
I needed a way to check for understanding and provide regular feedback without creating an impossible marking load. The traditional approach of a single, high-stakes final exam felt impersonal and didn't suit all students. I wanted to create a more relaxed and fair assessment environment that still maintained academic rigour.
What specifically did you decide to do?
I designed a series of in-class quizzes using the Loop quiz tool. The quizzes were a key part of the continuous assessment and were implemented in the following ways:
- Low Stakes and Scaffolding: I included three quizzes throughout the semester. The first quiz was weighted lower to help students understand what was expected of them and to introduce them to the process in a low-stress environment.
- Encouraging Attendance and Engagement: The quizzes were delivered during our double class session. I used the attendance tool with a QR code at the start of the class, followed by the quiz in the last 15-20 minutes. This approach led to very high attendance, with over 400 students consistently present. The quizzes were also designed to be challenging and conceptual, ensuring students had to actively engage with the material rather than just memorising facts.
- Accessibility and Fairness: I designed the process to be inclusive. Because the class was a double session, I used the 10-minute break to troubleshoot any technical issues students might have (e.g., problems with phones or logging in). This ensured a fair process and held students accountable for raising any issues in a timely manner. I also intentionally made the quiz available for a period of time, allowing students to complete it in class or find a quiet space outside if they preferred, accommodating different learning needs and comfort levels.
- Promoting Ownership: I was very explicit with my expectations. Students were reminded multiple times, both via email before class and verbally at the start of class and during the break, that it was their responsibility to ensure they had signed in and to flag any problems immediately.
How did it work in practice?
This use of quizzes proved to be a highly effective strategy. It provided the following benefits:
- High Engagement: The high attendance and active participation were clear indicators of student engagement. The structured, in-class quizzes incentivised students to attend and to keep pace with the weekly material.
- Scaffolding and Support: The quizzes gave students a chance to test their knowledge without the high pressure of a final exam. The lower weighting of the first quiz helped scaffold the students into the process, allowing them to adjust to a new style of assessment.
- Fairness and Inclusivity: The double class format was invaluable for providing a dedicated time to address technical issues, which is crucial in a large cohort. By allowing students some flexibility in where they completed the quiz, I was able to cater to a wider range of student needs, proving that "the loop quiz situation... can be designed to suit everybody."
- Lecturer and Student Empowerment: The process created a clear sense of ownership. Students were responsible for their own engagement and for flagging issues, while I felt confident that students were engaged and learning.
What are key reflections on this approach/process?
- Clarity is Crucial: Providing crystal-clear instructions about expectations, technology requirements, and troubleshooting procedures is essential, especially with a large class. The consistent, repeated communication helped minimise issues.
- Design for Fairness: The double-class format was a game-changer for me. Building in dedicated time for troubleshooting during the break was a simple but effective way to ensure a fair process for all students.
- Communicate the "Why": Students appreciated the rationale behind the quizzes. They understood that the regular, low-stakes assessments were designed to help them, not just to add to their workload.
- Building a Buzz: I found that the in-class quizzes created a positive buzz and excitement that a single exam never could. Students arrived ready to engage, knowing that their participation would directly contribute to their learning and grade. This approach shifted the focus from high-stakes testing to consistent, meaningful engagement.
Further Reading on Scaffolding Learning through Formative Quizzes
Morris, R., Perry, T., & Wardle, L. (2021). Formative assessment and feedback for learning in higher education: A systematic review. Review of Education, 9(3), e3292. https://doi.org/10.1002/rev3.3292
This systematic review highlights how formative assessments are most effective when they are frequent, low-stakes, and embedded within teaching. It supports Pia’s approach by showing that quizzes which provide timely feedback, build gradually in challenge, and are integrated across the semester can demonstrably improve student engagement and learning outcomes in large cohorts.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Nicol & Macfarlane-Dick’s model shows how regular, low-stakes quizzes can build autonomy and support students in monitoring their own progress. This supports Pia’s assessment strategy of encouraging students to take ownership of their learning.
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27. https://doi.org/10.1016/j.tics.2010.09.003
Roediger and Butler argue that retrieval practice not only checks understanding but also strengthens memory, reducing the risk of students cramming at the end of term. This article outlines some of the cognitive science basis of why Pia’s regular quizzes work.
Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3–31. https://eprints.glos.ac.uk/3609
Gibbs and Simpson argue that assessment should guide learning through clear expectations, fairness, and structured feedback, which aligns closely with Pia’s use of scaffolding (a lower-stakes first quiz, then building up).