The Software Engineering Institute at Carnegie Mellon University today announced the formation of the Artificial Intelligence Security Incident Response Team (AISIRT) to help ensure the safe and effective development and use of AI. AISIRT will analyze and respond to threats and security incidents emerging from advances in AI and machine learning (ML). The team will also lead research efforts in incident analysis and response and vulnerability mitigation involving AI and ML systems.
The rapid proliferation of AI has created a new class of software techniques for solving problems ranging from commonplace affairs to existential issues of national security. While these techniques can perform previously impossible feats, they also present enormous risks if deployed improperly or when deliberately misused. Safe and effective adoption of AI requires best practices for practitioners, coordination to identify and mitigate vulnerabilities, and a community of practice including academia, industry, and government organizations.
Led by the SEI, AISIRT will coordinate the work of a university-wide cadre of experts in cybersecurity, AI and ML to help assure the security and robustness of AI and ML platforms. The team will also support development of the security response capabilities called for in the Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.
“The exponential growth of AI technologies and capabilities will continue, and every organization in the public, private and corporate sector that embraces these new tools is seeking reassurance that they are safe, reliable, and secure. This is where AISIRT and Carnegie Mellon University will really shine,” said President Farnam Jahanian. “Our record of partnering with organizations to fully leverage the potential of AI is unmatched and ever evolving, and we look forward to strengthening this record through AISIRT’s important contributions — which stand to benefit every sector of our economy and society — in years to come.”
AISIRT will focus on a broad range of AI systems, from commerce and lifestyle platforms to, most importantly, those in critical infrastructure sectors, including defense and national security.
“AI and cybersecurity experts at the SEI are currently at work on AI- and ML-related vulnerabilities that, if left unaddressed, may be exploited by adversaries against national assets with potentially disastrous consequences,” said SEI Director and CEO Paul Nielsen. “Our research in this rapidly emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI.”
The SEI brings decades of experience in threat modeling and vulnerability coordination to the analysis and management of AI vulnerabilities. Since its founding in 1988, the SEI’s CERT Coordination Center, the world’s first computer security incident response team, has served as a central point for identifying and correcting vulnerabilities in computer systems. Now the SEI also spearheads the National AI Engineering Initiative, and SEI experts are defining the practices that support the creation of robust, secure, scalable, and human-centered AI systems. AISIRT represents just one of the initiatives underway at CMU to ensure the safety and reliability of AI with a focus on using AI for the betterment and advancement of society, while ensuring it is developed in an ethical, equitable, inclusive and responsible way.
In the same way that software vulnerabilities are reported to the CERT Coordination Center, researchers, developers and others who discover AI attacks or vulnerabilities in AI systems may report them to AISIRT by visiting https://kb.cert.org/vuls/report/.
For more information about AISIRT, visit https://www.sei.cmu.edu/go/aisirt, or contact @email.