Carnegie Mellon University has been at the epicenter of artificial intelligence from the creation of the first AI computer program in 1956 to pioneering work in self-driving cars and natural language processing. So it only makes sense that CMU experts would be at the forefront of advising national decision-makers on the fast-paced changes taking place in the field.
This summer, CMU faculty and leaders conducted AI policy briefings in Washington, accepting invitations from key federal agencies and Congressional committees and offices to discuss how the U.S. can continue to innovate and lead in the AI space. And more are underway this fall.
Amid growing concerns about the rapid development of AI platforms and tools, and even talk of the existential threat AI may pose, federal officials need a balanced assessment of the facts, one that recognizes the legitimate risks AI poses, such as job losses and racial bias, as well as the benefits of a transformative technology that can support humanity. That’s where CMU comes in. With a focus on using AI for the betterment and advancement of society, while ensuring it is developed in an ethical, equitable, inclusive and responsible way, CMU experts are well-positioned to engage with federal agencies, members of Congress and their staffs on where this technology stands and how best to move forward.
A central theme of many of these high-level conversations is the importance of effectively managing the creation of AI tools, while recognizing best practices for AI safety that promote consumer trust and protection. The dialogue prompted a significant proposal co-authored by Ramayya Krishnan, dean of CMU’s Heinz College of Information Systems and Public Policy, and Martial Hebert, dean of CMU’s School of Computer Science.
Writing in The Hill last July, Krishnan and Hebert advocated the creation of a federal AI Lead Rapid Response Team (ALRT) to address the uncertainties of AI by tracking emerging technologies, sharing best practices to ensure consistent approaches, and developing a system to test and verify the efficacy of new AI technologies and applications. “With this unified mission and federal funding, ALRT would form a proven industry and academia partnership, leveraging proprietary information in a trusted manner to combat the uncertainties of AI,” they said.
Their plan is modeled on the pioneering work done in the late 1980s to address cybersecurity concerns during the dawn of the internet age. At that time, the government formed the Computer Emergency Response Team (CERT) at CMU, bringing together government, industry and academia to better prepare computer systems for potential cybersecurity threats.
According to Krishnan and Hebert, using that approach to confront the rapidly changing development and deployment of AI tools would be a major step toward establishing essential guardrails while ensuring American leadership and competitiveness in the industry.
“This has to be a consensus-building activity. We should all play our roles in making sure we have the right processes around the technology, (and) the right governance ecosystem around it." — Hoda Heidari
The conversation continues on Capitol Hill this fall, as CMU experts testify in public hearings and provide briefings to members of Congress.
Among the engagements set for September:
Krishnan testifies before the Senate Commerce Committee.
Rayid Ghani, Distinguished Career Professor in the School of Computer Science’s Machine Learning Department and the Heinz College, testifies before the Senate Homeland Security Committee.
Aarti Singh, a professor in the Machine Learning Department and director of the National Science Foundation AI Institute for Societal Decision Making, meets with members of Congress and their staffs as part of the National AI Research Institutes Congressional Showcase.
Krishnan and faculty members from the Block Center for Technology and Society brief coalitions and caucuses from both sides of the aisle.
Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies in the Machine Learning Department and the Software and Societal Systems Department, is one of the Block Center faculty taking part. While speaking to Bill Flanagan, host of “Our Region’s Business,” at the K&L Gates Conference on Ethics and Artificial Intelligence at CMU earlier this summer, she emphasized the importance of engaging members of Congress on these issues.
“I’m actually very thrilled to see the recent amount of activity around AI governance and accountability…I think all these stakeholder groups are slowly realizing that if we do this right, the appropriate governance around the technology is going to help with innovation and with economic growth.” She added, “This has to be a consensus-building activity. We should all play our roles in making sure we have the right processes around the technology, (and) the right governance ecosystem around it. It has to be a partnership.”
Leaders and staff from the White House Office of Science and Technology Policy, the National Security Council, the U.S. Department of Commerce, the U.S. Department of Justice, the FBI and Department of Homeland Security all heard from various CMU experts this summer about the current state of AI research, development and demonstration activities, as well as efforts to prepare the U.S. workforce for AI integration.
The list of Congressional briefings included:
The Senate and House AI Caucuses
House Commerce, Justice, Science Appropriations staff
House Science Committee Majority Staff
House Armed Services Committee Majority Staff and Member
Senate Commerce Minority Staff
Senate Energy and Natural Resources Staff
Staffs of Senators Schumer, Booker, Casey and Heinrich
Republican Policy Committee