
AI Tool Lets New Managers Practice Performance Reviews and Job Interviews

Media Inquiries
Peter Kerwin, University Communications & Marketing

A new artificial intelligence tool developed at Carnegie Mellon University's Tepper School of Business lets students practice conducting performance reviews and job interviews with AI agents based on large language models (LLMs) that behave like employees. The tool could help prospective managers level up their experience before they hit the job market.

Anita Williams Woolley, a professor of organizational behavior and theory and the associate dean for research at the Tepper School, wanted to make her Managing People and Teams course more interactive. In the past, she taught behavioral skills, such as leading a team or facilitating collaboration, using business case studies or classroom simulations.

"Using traditional tools, students might get one opportunity to practice and perhaps some feedback from peers," Woolley said. "But the quality of that feedback is highly variable and there is no guarantee that it is useful or accurate."

Woolley collaborated with Behnam Mohammadi, a Ph.D. candidate and a teaching assistant for the course. He was working on a side research project called grAIdient, an app that could help instructors use LLMs to efficiently grade student papers and provide feedback on what students got right or wrong. Together, Mohammadi and Woolley saw another use for the tool.

"The idea was that if we can grade student assignments, then we can also provide live feedback to student interactions with AI," Mohammadi said. 

The updated version of grAIdient used four carefully designed AI agents, playing employees and job applicants, plus one "helper agent" to interactively teach management skills. Students typed to converse with the simulated employees, each of which had a detailed backstory, unique personality traits and an AI-generated profile picture. When the conversation was complete, the helper agent provided feedback using the grading rubric Woolley developed for the assignment. After students reviewed the feedback, they could repeat the exercise with a different "employee."
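The article does not publish grAIdient's implementation, but the pattern it describes, one LLM agent playing a persona while a second "helper agent" scores the transcript against an instructor's rubric, can be sketched in a few lines. Everything below is a hypothetical illustration in Python, not the actual tool: the persona text, the rubric, and the `llm` callable (standing in for any chat-completion backend) are all assumptions.

```python
# Minimal sketch of the two-agent pattern described in the article.
# NOT the actual grAIdient code; persona, rubric, and `llm` are assumed.
from typing import Callable, Dict, List

Message = Dict[str, str]  # {"role": "system"|"user"|"assistant", "content": ...}

EMPLOYEE_PERSONA = (
    "You are 'JD', a long-tenured employee who has grown burnt out. "
    "Stay in character; reveal the causes of your disengagement only if "
    "the manager listens well and asks good questions."
)

RUBRIC = (
    "Grade the manager's side of this performance review on: "
    "(1) delivering the difficult message clearly, "
    "(2) listening and diagnosing the underlying problem, "
    "(3) proposing solutions grounded in course frameworks. "
    "Give specific, constructive feedback."
)

def run_review(llm: Callable[[List[Message]], str]) -> None:
    """Let a student chat with the simulated employee, then have a
    second 'helper agent' call score the transcript against the rubric."""
    transcript: List[Message] = [{"role": "system", "content": EMPLOYEE_PERSONA}]

    while True:
        student_turn = input("Manager (blank line to finish): ").strip()
        if not student_turn:
            break
        transcript.append({"role": "user", "content": student_turn})
        reply = llm(transcript)  # employee agent responds in persona
        transcript.append({"role": "assistant", "content": reply})
        print(f"JD: {reply}")

    # Helper agent: a separate LLM call that sees the whole conversation
    # plus the grading rubric and returns feedback for the student.
    feedback = llm([
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": str(transcript)},
    ])
    print("\nFeedback:\n" + feedback)
```

A real deployment would add per-assignment rubrics, conversation logging, and the AI-generated profile pictures the article mentions, but the two-call structure is the core idea.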

In one assignment, students had to conduct a performance review of "JD," a longtime employee whom Woolley and Mohammadi categorized as "burnt out." Students had to break it to JD that he or she (the gender, age and other characteristics of the AI employee varied with each use) wasn't meeting goals and wouldn't be eligible for a promotion. The goal of that assignment was both to manage the difficult conversation effectively and to listen and diagnose what was causing the problem. Students were then asked to propose solutions and apply management frameworks and principles learned in the course.
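The source does not say how the per-use variation was implemented; one minimal way to get that behavior, with attribute ranges invented purely for illustration, is to sample the persona before each session:

```python
import random

def sample_jd_persona() -> str:
    """Sample a fresh 'JD' profile so each practice run differs.
    Illustrative only; attributes and ranges are not from the source."""
    gender = random.choice(["man", "woman"])
    age = random.randint(38, 58)
    tenure = random.randint(8, 20)
    return (
        f"You are 'JD', a {age}-year-old {gender} with {tenure} years at the "
        "company. You are burnt out, behind on goals, and not yet aware that "
        "you will be passed over for promotion."
    )
```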

Practicing with grAIdient instead of writing responses to case studies not only saved hundreds of hours of writing and grading but also provided something new: a right answer. For example, there was a clear winner in an assignment designed to simulate an employee hiring process.

"By design, some of the AI agents were more qualified than others," Mohammadi said. "When these students go to the real world they're going to have better knowledge of how to spot that." 

When students used grAIdient, their conversations were dynamic: what they said in the simulated experience shaped the rest of the conversation. Analyzing a case study means students are inherently responding to something that happened in the past, a very different experience from conducting interviews or performance reviews in the real world.

"Students could get feedback on their interactions in private and then try it again with a different AI employee. There's no feasible way to offer that in a traditional classroom, and it's what they need in order to get better at any of this," Woolley said.

The assignments that used grAIdient were optional, but Woolley said all of her students chose to try it. She was pleasantly surprised to find that the practice students got using the app improved their performance on the final exam.

"The students did just as well as past cohorts on the final exam, and in fact did significantly better on the parts of the question involving reasoning about and describing how to implement solutions to a problem, which is the part we really care about in developing future leaders," Woolley said. 

