Carnegie Mellon University
Eberly Center

Teaching Excellence & Educational Innovation


What’s the Eberly Center reading and thinking about this month?

The Research and Scholarship Digest, published the first Monday of each month, consists of short summaries of recent peer-reviewed studies on teaching and learning topics. This digest offers a view into what we are reading and thinking about at the Eberly Center, highlighting research that:

• adds to our understanding of how students learn,
• is potentially generalizable across teaching contexts in higher education, and
• provokes reflection on implications for our teaching and educational development practices.

We hope readers of this digest will find it a useful resource for staying in tune with the rapidly expanding education research literature.

December 2025

Chen, C. H., Fei, H. Y., & Tsai, C. C. (2025). Hierarchical analysis of in-service teachers’ barriers to technology-integrated instruction: A review of 2000-2024 publications. Computers & Education, 105509. 

This systematic review of 116 studies published between 2000 and 2024 identifies four major barriers teachers face when integrating technology into instruction: (1) limited systemic support, (2) inadequate beliefs and attitudes, (3) classroom management challenges, and (4) lack of design thinking. While systemic barriers dominate across all contexts, the review reveals that higher education instructors experience disproportionately high levels of barriers (2), (3), and (4). University instructors report more resistance and anxiety toward technology, greater difficulty managing participation and engagement in online and hybrid classes, and the highest rates of design-related challenges, particularly in developing effective online courses and aligning technology with complex disciplinary content. The review also shows that these internal and pedagogical barriers intensified during the COVID-19 pandemic. Overall, this study clarifies why technology integration is especially challenging in higher education and where support efforts should be targeted.

https://doi.org/10.1016/j.compedu.2025.105509


Han, X., Peng, H., & Liu, M. (2025). The impact of GenAI on learning outcomes: A systematic review and meta-analysis of experimental studies. Educational Research Review, 100714. 

This meta-analysis of 68 studies investigated the impact of generative AI (genAI) on student learning outcomes, aiming to provide a current snapshot of the research landscape in this relatively new but rapidly growing domain. To be included, studies needed to use a genAI technology (e.g., ChatGPT, Claude, Gemini), demonstrate sufficient methodological rigor, and measure at least one dimension of learning outcomes (e.g., cognitive, affective, behavioral, self-regulatory). The vast majority of studies (81%) were conducted in higher education settings. Using a random-effects model, the analysis yielded a meta-analytic effect size of 0.45, 95% CI [0.43, 0.47], with no evidence of publication bias. Effect sizes for the cognitive and affective dimensions were similar (0.34 and 0.38, respectively), while the behavioral and self-regulatory dimensions showed larger effects (1.08 and 0.74, respectively). Analyses also indicated substantial heterogeneity across studies. Significant moderators included discipline, with natural sciences showing the highest effect size (0.70), followed by humanities (0.46) and social sciences (0.39), and intervention duration, with short-term interventions demonstrating a larger effect (0.48) than long-term interventions (0.35). Taken together, these results suggest that generative AI technologies can augment student learning relative to traditional instruction, though much nuance remains to be unpacked. Importantly, distinctions between tool-enhanced student products and authentic learning or transfer were not explored. Further discussion of all findings and implications is featured in the paper.
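For readers less familiar with how a random-effects meta-analysis pools results, the sketch below illustrates the general idea in Python using the common DerSimonian-Laird estimator. This is not the authors' analysis pipeline, and the effect sizes and variances are invented purely for illustration.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    k = len(y)

    # Fixed-effect weights and pooled estimate (needed to compute Q)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)

    # Cochran's Q and the DL estimate of between-study variance tau^2
    Q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects weights fold in the between-study heterogeneity
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical effect sizes and sampling variances for five studies
pooled, ci, tau2 = random_effects_pool(
    effects=[0.30, 0.55, 0.42, 0.70, 0.25],
    variances=[0.02, 0.03, 0.01, 0.04, 0.02],
)
print(f"pooled = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], tau^2 = {tau2:.2f}")
```

The between-study variance tau^2 is what distinguishes a random-effects model from a fixed-effect one; it widens the study weights to reflect the kind of heterogeneity the authors report.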

https://doi.org/10.1016/j.edurev.2025.100714


Owenz, M., Tewksbury, M., Cruz, C., & Owenz, M. B. (2025). Enhancing inclusive teaching measurement: The development of the power-sharing practices checklist. To Improve the Academy: A Journal of Educational Development, 44(2).

This article presents a power-sharing practices checklist developed by the authors as a tool to identify and measure inclusive teaching practices related to power-sharing, which they define as any practice or policy that provides space for students to exercise their voice and make choices. The goal of the checklist is to center student perspectives on power-sharing practices. To identify items for the checklist, a focus group of 30 faculty members and a focus group of 30 students were convened to discuss teaching practices related to student voice and choice. A thematic analysis of this qualitative data revealed that power-sharing items cluster around four sub-themes: activities, policies, assessments, and materials. The researchers worked independently to develop items related to those four sub-themes. The survey was then deployed to 276 students enrolled at three different institutions in Pennsylvania, as well as to 32 faculty recruited from a US national sample. Survey participants completed the 31-item Power-Sharing Practices Checklist along with five well-validated measures of academic engagement. The results support the validity and reliability of the checklist and also reveal significant differences between faculty and student perceptions of power-sharing.
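As a point of reference for the kind of reliability evidence the authors report, the snippet below computes one conventional internal-consistency statistic, Cronbach's alpha, from scratch. The paper does not specify that this exact statistic was used, and the scores here are made up for illustration only.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    X = np.asarray(item_scores, dtype=float)
    n_items = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents rating 4 checklist items on a 1-5 scale
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```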

https://doi.org/10.3998/tia.6893


Shulgina, G., Adamovich, K., Zhang, H., Fanguy, M., Baldwin, M., & Costley, J. (2025). Comment type matters: Analysing the implementation of summary, problem/solution, and praise comments in peer feedback. Active Learning in Higher Education, 14697874251389104. 

This study investigated the relationship between peer feedback and writing improvement among 187 STEM graduate students at a Korean university participating in a flipped academic writing course. The research focused on an asynchronous online peer feedback activity where students exchanged and implemented suggestions on journal manuscript drafts using Google Docs.

The methodology employed a quantitative approach to analyzing the feedback. All received comments were coded into three types: praise, summary, or problem/solution. Researchers compared initial and final drafts to determine the rate at which comments were implemented. Pairwise correlations were used to assess the connections among overall comment volume, implementation rate, and instructor-assessed writing performance (measured on a 0-10 rubric). Path analysis further explored how the receipt and implementation of specific comment types related to final scores.
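As a rough illustration of the correlational piece of this design (not the authors' actual code or data), the sketch below computes a Pearson correlation between a hypothetical per-student implementation rate and rubric scores; the path analysis itself would typically be run in dedicated structural equation modeling software.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: each student's rate of implementing problem/solution
# comments and their instructor-assessed score on a 0-10 rubric
implementation_rate = np.array([0.20, 0.45, 0.50, 0.65, 0.80, 0.90])
final_score = np.array([5.0, 6.0, 6.5, 7.5, 8.0, 9.0])

r, p = pearsonr(implementation_rate, final_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```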

The key result was that the quality of feedback, not just its quantity, drives performance gains. While receiving many problem/solution comments often indicated a weaker initial draft, actively implementing these specific, constructive comments proved to be the decisive factor in improved writing performance. This finding suggests that highly actionable, targeted feedback content is considerably more valuable than sheer volume of feedback.

https://doi.org/10.1177/14697874251389104