Dean Krishnan testifies during a Senate hearing.

CMU Dean Testifies Before Senate Subcommittee on the Need for Transparency in AI

Media Inquiries
Peter Kerwin
University Communications & Marketing

The Senate Subcommittee on Consumer Protection, Product Safety and Data Security convened Sept. 12 in Washington, D.C., to examine how to help Americans understand the capabilities and limitations of artificial intelligence, reduce AI’s potential risks to consumers, and increase the public’s trust in AI systems through transparency.
 
Ramayya Krishnan, dean of Carnegie Mellon University’s Heinz College of Information Systems and Public Policy and founding faculty director of The Block Center for Technology and Society, which studies the responsible use of AI and the future of work, testified before the subcommittee on the need for greater accountability and transparency in the development and deployment of AI to spur its responsible adoption and use.
 
“As AI technologies are considered for use in high-stakes applications such as autonomous vehicles, health care, recruiting and criminal justice, the unwillingness of the leading vendors to disclose the attributes and provenance of the data they have used to train and tune their models and the processes they have employed for model training and alignment to minimize the risk of toxic or harmful responses needs to be urgently addressed,” Krishnan said. 
 
He noted this lack of transparency “creates threats to privacy, security and the uncompensated use of intellectual property rights to copyrighted content, in addition to harms caused to individuals and communities due to biased and unreliable performance.”
 
Krishnan proposed four recommendations for Congress that would provide near-term impact on the trusted adoption of AI and, when combined with a focused research program, ensure U.S. leadership in responsible and trustworthy AI.
 
First, he recommended that Congress require all federal agencies to use the National Institute of Standards and Technology (NIST) AI Risk Management Framework during the design, development, procurement, use and management of their AI use cases.
 
“NIST was developed with multiple stakeholder input, and establishing it as a standard will have numerous benefits at home and abroad,” he said.
 
Next, he recommended that Congress require standardized documentation for what he referred to as the AI pipeline, which consists of training data, models and applications, and that the documentation be verifiable by a trusted third party, similar to how financial statements can be audited.
 
“Think of these as nutrition labels, so it’s clear what went into producing the model,” Krishnan said. “At the minimum, the label should document the sources and rights that the model developers have to be able to use the data that they did, and the structural analysis they’ve done to check for biases.”
 
In this vein, he suggested Congress require a model validation report for AI systems deployed in high-stakes applications, akin to an “underwriters lab” report that objectively assesses a system’s risk and performance in those settings.
 
In his third recommendation, which addressed content labeling and detection, Krishnan explained that generative AI has increased the capacity to create multimodal content that is indistinguishable from human-created output, and he said there is currently no standardized way to label that content.
 
“We need standards here, and Congress should require all AI models, open-source or otherwise, that produce content to be able to label their content with watermarking technology and provide a tool to detect the label,” he said. “While the usual concern about labeling is with regard to consumers, this is going to be equally important for model developers to know if the data that they’re using in their models is human-generated or AI-generated.”
 
Finally, Krishnan recommended investing in a trust infrastructure for AI that would connect vendors, catalog incidents, record vulnerabilities, test and verify models, and disseminate best practices, much like the Computer Emergency Response Team (CERT) that was set up in the late 1980s in response to cybersecurity attacks.
 
“This will go a long way toward improving our trust capability, especially since the technology is moving so quickly,” he said.
 
In closing, Krishnan said the success of his recommendations rests in part on a comprehensive approach to enhancing AI skills across K-12 and community colleges, as well as on policies and strategies, such as wage insurance, to address the impact of AI.
 
The hearing included Senator Maria Cantwell of Washington, chair of the Senate Committee on Commerce, Science and Transportation, and the committee’s ranking member, Senator Ted Cruz of Texas. The subcommittee is chaired by Senator John Hickenlooper of Colorado, with Senator Marsha Blackburn of Tennessee as its ranking member. Victoria Espinel, CEO of BSA | The Software Alliance; Sam Gregory, executive director of WITNESS, an organization that helps people use video and technology to protect and defend human rights; and Rob Strayer, executive vice president of policy at the Information Technology Industry Council, also testified.

AI on the Hill


Watch the hearing
