Gov. Josh Shapiro, who signed an executive order on commonwealth use of generative artificial intelligence, visits Carnegie Mellon.

CMU Advances Trustworthy AI as Part of AI Safety Institute Consortium

Media Inquiries
Peter Kerwin
University Communications & Marketing

Carnegie Mellon University is joining some of the nation’s leading artificial intelligence (AI) stakeholders in a new U.S. Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. The department’s National Institute of Standards and Technology (NIST) has established the U.S. AI Safety Institute Consortium (AISIC) to bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

"To maximize AI's potential, we need multidisciplinary research and innovation to make AI safe, trustworthy and reliable," said Ramayya Krishnan(opens in new window), dean of the Heinz College of Information Systems and Public Policy, faculty director of the Block Center for Technology and Society(opens in new window), and a member of the U.S. Department of Commerce’s National Artificial Intelligence Advisory Committee(opens in new window). "The consortium housed in the AI Safety Institute provides the platform for these conversations and will be an important resource for researchers and practitioners alike to advance safe AI.”

AISIC includes more than 200 member companies and organizations that are on the frontlines of developing and using AI systems, as well as academic teams building the foundational understanding of how AI can and will transform our society. These entities represent the nation’s largest companies and innovative startups; creators of the world’s most advanced AI systems and hardware; key members of civil society and the academic community; and representatives of professions with deep engagement in AI’s use today. The consortium also includes state and local governments, as well as nonprofits.

Because the advance of AI promises enormous potential but also introduces new and dangerous risks, CMU experts across the university are focused on ensuring the safe and responsible development and use of AI. The Software Engineering Institute (SEI) recently formed the first Artificial Intelligence Security Incident Response Team (AISIRT) to identify, analyze and respond to threats that emerge from advances in AI and machine learning. SEI also coordinates the new Center for Calibrated Trust Measurement and Evaluation, a pilot initiative with the Department of Defense that helps the U.S. military assess the trustworthiness of AI systems.

Just last fall, CMU’s Block Center, one of the nation’s leading research centers working to shape the impact of generative AI tools and platforms, announced a collaborative partnership with Pennsylvania Gov. Josh Shapiro to promote the responsible use of AI by the commonwealth’s government agencies. Faculty experts provide advisory support for a Generative AI Governance Board to guide commonwealth policy, use and deployment of AI, while offering additional research support on generative AI usage.

"Our scholars, scientists, humanists and artists are not only advancing the frontiers of science and technology, but are building the foundational knowledge of how to realize AI’s extraordinary potential while mitigating its risk." — Theresa Mayer

The Block Center’s Responsible AI Initiative has also worked closely with NIST to operationalize the NIST AI Risk Management Framework, which was developed through broad collaboration between the public and private sectors and provides guidelines to better manage the potential risks of AI systems at all levels of society.

“As pioneers in artificial intelligence, Carnegie Mellon University has led the way in harnessing this transformational technology for the public good,” said Theresa Mayer, CMU’s vice president for research. “Our scholars, scientists, humanists and artists are not only advancing the frontiers of science and technology, but are building the foundational knowledge of how to realize AI’s extraordinary potential while mitigating its risk. Our participation in AISIC reflects that strength, as we collaborate with our federal partners to ensure a human-centered focus in promoting the development of safe, reliable and responsible AI tools and platforms.”

“Evaluating and certifying AI systems are key obstacles to their real-world deployment in safety-critical applications. Knowing which AI tool to use and instilling confidence in these tools are essential steps to creating trust and acceptance in this technology,” said Martial Hebert, dean of CMU’s School of Computer Science. “The U.S. AI Safety Institute Consortium will directly address this challenge through a team spanning engineering, computer science and public policy. CMU’s multidisciplinary research and experience in building safe, reliable, human-centered AI technologies will be an important component of this effort. We are building on existing federal and state partnerships and a rich history of research and development around AI.”

"Before engineered systems that employ AI are deployed, it is critical that they be proven to be safe and trustworthy. Building on decades of world leadership in creating dependable systems, Carnegie Mellon is uniquely positioned to develop methods to assess and certify the safety and trustworthiness of AI-orchestrated systems, ensuring that they do what they are intended to do, and nothing else," said Bill Sanders(opens in new window), Strecker Dean of the College of Engineering. "We look forward to working closely with the federal government and others to avoid the potential harm to society that could come from the deployment of systems that are not known to be safe and trustworthy, thus realizing the full potential of AI to transform the engineering discipline for real and enduring good."
