Vincent Conitzer

Expertise

Topics: Ethics in AI, Machine Learning, Artificial Intelligence, Computer Science

Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford.

Prior to joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University.

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC). With Jana Schaich Borg and Walter Sinnott-Armstrong, he authored "Moral AI: And How We Get There."

Media Experience

Gen AI's Accuracy Problems Aren't Going Away Anytime Soon, Researchers Say  — CNET
Vincent Conitzer (School of Computer Science) says the industry is still far from developing reliable and trustworthy models, with many researchers doubting that artificial general intelligence is on the horizon anytime soon. "An AI system, it might just claim to be very confident about something that's completely nonsense," said Conitzer.

DeepMind claims its AI performs better than International Mathematical Olympiad gold medalists  — TechCrunch
Google DeepMind’s AI system AlphaGeometry2 has outperformed the average gold medalist in solving geometry problems from the International Mathematical Olympiad. “It is striking to see the contrast between continuing, spectacular progress on these kinds of benchmarks, and meanwhile, language models, including more recent ones with ‘reasoning,’ continuing to struggle with some simple commonsense problems,” said Vince Conitzer (School of Computer Science).

Two misuses of popular AI tools spark the question: When do we blame the tools?  — Fortune
Two recent incidents highlight concerns about AI misuse: a man used ChatGPT to plan an attack in Las Vegas, and AI video tools were exploited to create harmful content. These events sparked debate about regulating AI and holding developers accountable for potential harm caused by their technology. Carnegie Mellon University professor Vincent Conitzer explained that “our understanding of generative AI is still limited” and that we can't fully explain its success, predict its outputs, or ensure its safety with current methods.

How Forbes Compiled The 2024 AI 50 List  — Forbes
Expert Judge: Vincent Conitzer is a professor of computer science at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab, which studies foundations of game theory for advanced, autonomous AI agents. He is also a professor of computer science and philosophy at the University of Oxford, where he is the head of technical AI engagement at the Institute for Ethics in AI.

The Excerpt podcast: AI has been unleashed. Should we be concerned?  — USA Today
The unleashing of powerful artificial intelligence into the world, with little to no regulation or guardrails, has put many people on edge. It holds tremendous promise in all sorts of fields, from healthcare to law enforcement, but it also poses many risks. How worried should we be? To help us dig into it, we're joined by Vince Conitzer, Head of Technical AI Engagement at the Institute for Ethics in AI at the University of Oxford.

Deepfakes Are Evolving. This Company Wants to Catch Them All  — Wired
Vincent Conitzer, a computer scientist at Carnegie Mellon University in Pittsburgh and coauthor of the book Moral AI, expects AI fakery to become more pervasive and more pernicious. That means, he says, there will be growing demand for tools designed to counter them. “It is an arms race,” Conitzer says. “Even if you have something that right now is very effective at catching deepfakes, there's no guarantee that it will be effective at catching the next generation. A successful detector might even be used to train the next generation of deepfakes to evade that detector.”

How the University of Michigan Is Selling Student Data to Train AI  — MSN
“My first reaction is one of skepticism,” Vincent Conitzer, an AI ethics researcher at Carnegie Mellon University, told The Daily Beast. “Also, even taking this message mostly at face value, I suppose it may just all be based on recordings and papers that are anyway in the public domain.”

The Metaverse Flopped, So Mark Zuckerberg Is Pivoting to Empty AI Hype  — MSN
As for what this hypothetical AGI would look like, Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford's Institute for Ethics in AI, speculates that Meta could start with something like Llama and expand from there. "I imagine that they will focus their attention on large language models, and will probably be going more in the multimodal direction, meaning making these systems capable with images, audio, video," he says, like Google's Gemini, released in December.

AI automated discrimination. Here’s how to spot it.  — Vox
For many Americans, AI-powered algorithms are already part of their daily routines, from recommendation algorithms driving their online shopping to the posts they see on social media. Vincent Conitzer, a professor of computer science at Carnegie Mellon University, notes that the rise of chatbots like ChatGPT provides more opportunities for these algorithms to produce bias. Meanwhile, companies like Google and Microsoft are looking to generative AI to power the search engines of the future, where users will be able to ask conversational questions and get clear, simple answers.

AI Chat Bots Are Running Amok — And We Have No Clue How to Stop Them  — Rolling Stone
“One common thread” in these incidents, according to Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford’s Institute for Ethics in AI, “is that our understanding of these systems is still very limited.”

Could AI swamp social media with fake accounts?  — BBC News
"Something like ChatGPT can scale that spread of fake accounts on a level we haven't seen before," says Vincent Conitzer, a professor of computer science at Carnegie Mellon University, "and it can become harder to distinguish each of those accounts from human beings."

Education

Ph.D., Computer Science, Carnegie Mellon University
A.B., Applied Mathematics, Harvard University

Spotlights

When do we blame the tools?
(January 13, 2025)

Accomplishments

Honorable Mention for Best Paper Award, HCOMP 2022 (2022)

Oxford University Press’ “Best of Philosophy” (2021)

IFAAMAS Influential Paper Award (2022)

ACM/SIGAI Autonomous Agents Research Award (2021)

Affiliations

Cooperative AI Foundation : Advisor

Event Appearances

Social choice for AI ethics and safety
July 2024 | 17th Meeting of the Society for Social Choice and Welfare (SSCW-24), Paris, France

Social Choice for AI Alignment
June 2024 | 14th Oxford Workshop on Global Priorities Research, Oxford, UK

Social Choice for AI Alignment (Special Session on Alternative Models for Fairness in AI)
January 2024 | International Symposium on AI and Mathematics (ISAIM), Fort Lauderdale, FL

Articles

Should Responsibility Affect Who Gets a Kidney?  —  Responsibility and Healthcare

Computing optimal equilibria and mechanisms via learning in zero-sum extensive-form games  —  Advances in Neural Information Processing Systems

Similarity-based cooperative equilibrium  —  Advances in Neural Information Processing Systems

Pacing equilibrium in first price auction markets  —  Management Science

Safe Pareto improvements for delegated game playing  —  Autonomous Agents and Multi-Agent Systems

Research Grants

Foundations of Cooperative AI Lab at Carnegie Mellon
Center for Emerging Risk Research (CERR), $3,000,000

Foundations of Cooperative AI Lab at Carnegie Mellon
Cooperative AI Foundation (CAIF), $500,000

Information Networks: RESUME: Artificial Intelligence, Algorithms, and Optimization for Responsible Reopening
ARO Grant W911NF2110230, $99,821
