GAITAR@Scale
Impacts of asynchronous learning modules on genAI competency in college students (Fall 2024)
Research Question(s):
- Can a relatively short, asynchronous, self-paced set of genAI learning modules improve genAI competency (i.e., knowledge, skills, and self-efficacy)?
- Are impacts of the intervention experienced equitably by all students?
Study Participants:
1,477 students across 53 different courses taught by 46 instructors
- Control (no modules): n = 677 students
- Treatment (modules): n = 800 students
Intervention & Study Design:
With input from faculty, staff, and student genAI subject matter experts, four learning modules were designed by the office of the Vice Provost for Teaching & Learning Innovation and the Eberly Center for Teaching Excellence & Educational Innovation to foster genAI competency. GenAI competency is defined as the combination of knowledge, skills, and self-efficacy (Chiu et al., 2024).
Students independently engaged with these 90-minute modules asynchronously. The learning experience included instruction, examples, practice, and immediate feedback to support the following learning objectives:
- Describe the basic mechanisms behind how generative AI tools are built and how they work,
- Explain why students’ decisions about and applications of generative AI tools will differ across individuals, contexts, tasks, and goals,
- Analyze the ethical implications and other concerns of these tools to be a responsible user and/or creator,
- Identify and apply strategies to appropriately and responsibly use generative AI for a given educational task, and
- Report an increased level of self-efficacy for appropriately using generative AI tools in educational situations.
In Fall 2024, 46 instructors across the university volunteered to enroll at least one course in this study. Each course was randomly assigned to either the treatment or control condition. At the beginning of the semester, students in both conditions completed a pretest assessing knowledge, skills, and self-efficacy. Students in the treatment condition then had access to the genAI learning modules for 48 hours before taking a post-test 4 days later. Students in the control condition did not gain access to the modules until after they had completed the post-test, which they took during the same window as the treatment condition (see Figure 1).
Figure 1. Study design for the genAI modules experiment.

Data Sources:
- Pre- and post-test measures:
  - Knowledge, measured by multiple-choice and matching questions about genAI and its responsible use.
  - Skills, measured by authentic tasks in which students demonstrated:
    - Authentic Task 1: Prompt engineering
    - Authentic Task 2: Evaluation of output
  - Attitudes, measured by self-efficacy surveys regarding genAI competency.
- Student demographic data, obtained from the university registrar.
Findings:
Research Question 1: Engaging with the learning modules significantly improved students’ knowledge, prompt engineering skills, and self-efficacy above and beyond the gains observed in the control condition. The modules did not affect students’ overall output analysis skills, as measured by the second authentic task (evaluation of output).

Figure 2. There was a significant main effect of time (pre-to-post change), F(1,172) = 31.57, p < .001, ηp2 = .16. There was a significant main effect of condition, F(1,172) = 4.88, p = .03, ηp2 = .03. Importantly, there was a significant time x condition interaction, F(1,172) = 39.60, p < .001, ηp2 = .19. While both the treatment and control students improved over time, the modules significantly improved treatment students’ genAI knowledge above and beyond other influences on students’ performance.

Figure 3. For the output evaluation task, there was no significant main effect of time, F(1,171) = .23, p = .63, ηp2 = .00. There was a significant main effect of condition, F(1,171) = 6.76, p = .01, ηp2 = .04, indicating that students in the treatment condition performed better overall (M = 56.40%, SE = 2.01%) than students in the control condition (M = 49.04%, SE = 1.99%). However, the predicted time x condition interaction was not significant, F(1,171) = 1.53, p = .22, ηp2 = .01.
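As a sanity check on the reported effect sizes, each partial eta squared value above can be recovered directly from its F statistic and degrees of freedom via ηp2 = (F × df_effect) / (F × df_effect + df_error). A minimal Python sketch, using only the statistics reported for Figures 2 and 3 (the labels are ours, for illustration):

```python
def partial_eta_squared(f_stat: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# (label, F, df_effect, df_error, reported eta_p^2), as reported above
reported = [
    ("Fig. 2 time",             31.57, 1, 172, .16),
    ("Fig. 2 condition",         4.88, 1, 172, .03),
    ("Fig. 2 time x condition", 39.60, 1, 172, .19),
    ("Fig. 3 time",              0.23, 1, 171, .00),
    ("Fig. 3 condition",         6.76, 1, 171, .04),
    ("Fig. 3 time x condition",  1.53, 1, 171, .01),
]

for label, f, df1, df2, eta in reported:
    recovered = partial_eta_squared(f, df1, df2)
    print(f"{label}: recovered eta_p^2 = {recovered:.2f} (reported {eta:.2f})")
```

Running this reproduces every reported ηp2 value to two decimal places, confirming the internal consistency of the statistics above.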
Research Question 2: Engaging with the learning modules led to equitable outcomes in genAI knowledge, prompt engineering skills, and self-efficacy across all student demographics analyzed (discipline, sex, race, class year, and first-generation status).
Eberly Center’s Takeaways:
Research Question 1 and Research Question 2: A set of widely accessible, discipline-agnostic learning modules designed with university subject-matter experts improved students’ generative AI competency. Students experienced the positive impact of this intervention equitably. We also observed increases in knowledge and self-efficacy for the control condition, likely due to practice and testing effects, and/or other ambient learning opportunities. However, the experimental nature of this study means that engaging with the modules significantly improved genAI knowledge and self-efficacy above and beyond other influences on students’ genAI competency.
As such, this 90-minute, asynchronous online learning intervention is an empirically validated active learning tool for building students’ genAI competency. Whereas previous interventions have focused primarily on students’ knowledge of genAI, the present study also measured the impact of the modules on students’ skills and their self-efficacy for effectively and ethically using genAI for educational purposes.
Students’ genAI competency is increasingly important to employers (Cengage Group, 2024). Rather than relying on individual instructors’ efforts to foster genAI competency, CMU designed these modules as a widely accessible, scalable intervention to support workforce development at any point in the curriculum. Consequently, CMU has incorporated portions of the modules into a required online course taken by all first-year students. We also provide these resources to faculty interested in foundational upskilling regarding genAI.
Future directions include updating the modules’ content to reflect new genAI developments, and strengthening the instruction, practice, and feedback on the critical analysis of output.
References
Chiu, T. K., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6, 100171.
Cengage Group (2024). 2024 graduate employability report: Preparing students for the genAI-driven workplace. Cengage Group. https://cengage.widen.net/s/bmjxxjx9mm/cg-2024-employability-survey-report