
Ken Holstein

Assistant Professor, Human-Computer Interaction Institute

Ken Holstein's research focuses broadly on AI-augmented work and improving how we design and evaluate AI systems for real-world use.


Expertise

Topics: Elections, Intelligence Augmentation, Applied Machine Learning, Artificial Intelligence, Human-Computer Interaction, Worker-Centered Design

Industries: Research, Education/Learning, Computer Software

Ken Holstein is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University, where he directs the CMU CoALA Lab. In addition to his position at CMU, Ken is an inaugural member of the Partnership on AI’s Global Task Force for Inclusive AI. He is also part of Northwestern’s Center for Advancing Safety of Machine Intelligence (CASMI) and the Jacobs Foundation’s CERES network.

Ken's research focuses broadly on AI-augmented work and improving how we design and evaluate AI systems for real-world use. Ken draws on approaches from human–computer interaction (HCI), AI, design, cognitive science, learning sciences, statistics, and machine learning, among other areas.

Ken is deeply interested in: (1) understanding the gaps between human and artificial intelligence across a range of contexts, and (2) using this knowledge to design systems that respect human work, elevating human expertise and on-the-ground knowledge rather than diminishing it. To support these goals, Ken's research develops new approaches and tools that support better incorporation of diverse human expertise across the AI development lifecycle.

Ken's work has been generously supported by the National Science Foundation (NSF), CMU’s Block Center for Technology and Society, Northwestern’s CASMI & UL Research Institutes, Institute for Education Sciences (IES), Cisco Research, Jacobs Foundation, Amazon Research, CMU’s Metro21 Smart Cities Institute, and Prolific.

Media Experience

‘Smart’ glasses for teachers help pupils learn  — Tes Magazine
“By alerting teachers in real-time to situations the ITS [intelligent tutoring system] may be ill-suited to handle on its own, Lumilo facilitates a form of mutual support or co-orchestration between the human teacher and the AI tutor,” said Ken Holstein, lead author on the study with Bruce M. McLaren and Vincent Aleven.

These glasses give teachers superpowers  — The Hechinger Report
Lumilo is the brainchild of a team at Carnegie Mellon University. Ken Holstein, a doctoral candidate at the university, designed the app with significant input from teachers like Mawhinney who use cognitive tutors in their classrooms. The project treads new ground for the use of artificial intelligence in schools.

Funding New Research to Operationalize Safety in Artificial Intelligence  — Northwestern Engineering
Kenneth Holstein, assistant professor in the Human-Computer Interaction Institute at Carnegie Mellon University, will study how to support effective AI-augmented decision-making in the context of social work. In this domain, predictions regarding human behavior are fundamentally uncertain and ground truth labels upon which an AI system is trained — for example, whether an observed behavior is considered socially harmful — often represent imperfect proxies for the outcomes human decision-makers are interested in modeling.
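
The “imperfect proxies” problem described above is the same issue examined in the Ground(less) Truth article listed under Articles below. The short sketch that follows is purely illustrative: the synthetic data, the features, and the split between an observed proxy label and the outcome of interest are hypothetical and are not drawn from Holstein's studies or any deployed system. It simply shows how a model can look accurate against the proxy it was trained on while tracking the outcome decision-makers actually care about far less well.

# Illustrative sketch only: a toy version of the proxy-label issue described above.
# The data-generating process, feature names, and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two hypothetical case features a screening model might observe.
x = rng.normal(size=(n, 2))

# The outcome the human decision-maker actually cares about (not observed in training data).
true_outcome = (0.8 * x[:, 0] - 0.2 * x[:, 1] + rng.normal(scale=0.5, size=n)) > 0

# The observed proxy label: correlated with the true outcome, but also driven by a
# feature that has little to do with it (e.g., how often a case gets re-reported).
proxy_label = (0.4 * x[:, 0] + 0.9 * x[:, 1] + rng.normal(scale=0.5, size=n)) > 0

# Train on the proxy, as a real system would have to, then compare the two evaluations.
model = LogisticRegression().fit(x, proxy_label)
pred = model.predict(x)

print("accuracy vs. proxy label:  ", round(float((pred == proxy_label).mean()), 2))
print("accuracy vs. true outcome: ", round(float((pred == true_outcome).mean()), 2))
# High accuracy against the proxy does not imply high accuracy on the outcome of interest.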

In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT  — WIRED
Others working in tech also expressed misgivings about the letter's focus on long-term risks, since systems available today, including ChatGPT, already pose threats. “I find recent developments very exciting,” says Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, who asked that his name be removed from the letter a day after signing it, as debate emerged among scientists about the best demands to make at this moment.

Education

B.S., Psychology (Cognitive focus), University of Pittsburgh
M.S., Human–Computer Interaction, Carnegie Mellon University
Ph.D., Human–Computer Interaction, Carnegie Mellon University

Accomplishments

CMU Teaching Innovation Award (2022 Prototyping Algorithmic Experiences (PAX))

Graduate Student Poster Grand Prize (2022 Grefenstette Tech Ethics Symposium)

Best Paper Award (2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML’23))

Best Paper Award (2023 ACM CHI Conference on Human Factors in Computing Systems (CHI’23))

Best Paper Award (2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT’23))

Affiliations

Association for Computing Machinery (ACM) : Member

Design Justice Network (DJN) : Member

Event Appearances

Supporting Effective AI-Augmented Decision-Making in Social Contexts
Toward a Safety Science of AI, Northwestern University, Evanston, IL

Fostering Critical AI Literacy Among Frontline Workers, the Public, & AI Developers
HCI + Design Thought Leaders Lecture, Northwestern University, Evanston, IL

Designing for Complementarity in AI-Augmented Work
UCI Informatics Seminar Series, University of California Irvine (UCI), Irvine, CA

Articles

A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms  —  2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)

Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning  —  CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice  —  FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency

Understanding Frontline Workers’ and Unhoused Individuals’ Perspectives on AI Used in Homeless Services  —  CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making  —  FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency

Research Grants

Supporting Effective AI-Augmented Decision-Making in Content Moderation
Block Center, $80,000

Supporting Effective AI-Augmented Decision-Making in Social Contexts
Center for Advancing Safety of Machine Intelligence (CASMI) and UL, $275,000

Bridging Policy Gaps in the Life Cycle of Public Algorithmic Systems
Block Center, $80,000

Scaffolding Responsible AI Practice at the Earliest Stages of Ideation, Problem Formulation and Project Selection
PwC, $350,193

AI-Augmented Illustration through Conversational Interaction
Prolific, $10,000
