Using AI to Defend Against Cyber Threats
By: Lujo Bauer and Vyas Sekar
The world is approaching a pivotal moment where advances in AI, critical infrastructure, and cybersecurity are rapidly converging. The U.S. has the opportunity to stay a step ahead of its adversaries, employing AI and automation to help protect our critical infrastructure from attack. Researchers at Carnegie Mellon University’s CyLab are at the forefront of this work.
Why it matters: AI is already changing how cyber threats evolve. With increasing use of AI for code generation and workflow automation, our ability to quickly build and deploy new features far exceeds our ability to secure systems. Even as these new cyber threats emerge, we are relying more heavily on AI and autonomous systems within the nation’s critical industries and infrastructure, including energy, water, transportation, health care, and financial services.
Catch up quick: Existing mechanisms, protocols, and processes used to secure our critical infrastructure are based on a “human attacker” mindset. Today’s security operations rely on manually predefined rules that grant or deny access based on simple heuristic factors and human-timescale responses. CMU researchers have observed that AI-driven autonomous capabilities for uncovering and exploiting vulnerabilities dramatically accelerate attacks. The threats to our critical infrastructure will increase significantly, and our existing mechanisms are no longer sufficient. The U.S. needs to invest in autonomous cyber operations for defending critical infrastructure against future autonomous cyber threats.
What we did: Researchers at Carnegie Mellon’s CyLab released two groundbreaking new studies on the use of AI for autonomous cyber operations — one focused on offensive capabilities, and the other on next-generation defense tactics.
- The research on offensive capabilities showed that when equipped with new abstractions, AI-driven red teams are able to autonomously execute complex multi-stage attacks against realistic networks in a matter of minutes, costing only a few tens of dollars.
- The research on defense tactics showed that deceptive strategies, if deployed correctly, can slow down and help defeat most of these AI-driven attackers.
- Together, the work lays a foundation for the use of AI-enabled systems in understanding and defending against sophisticated cyberattacks.
What we found: CMU CyLab experts found that building an open, extensible, and community-driven platform for benchmarking AI-driven attacks and defenses in realistic critical-infrastructure settings is both timely and critical. This includes:
- Autonomous red-teaming with AI: Right now, only big companies can afford professional human “red team” tests on their networks, and even they might run them only once or twice a year. To empower defenders with AI-assisted tools that autonomously catch problems before real attackers do, CMU researchers created a novel framework that uses modern AI models to autonomously plan and execute complex network red-team attacks.
- Cyber deception war gaming: As AI-based attackers become the norm, CMU researchers examined the effectiveness of cyber deception tactics to distract, detect, delay, and thwart attacks. CMU research shows how operators can proactively run a broad spectrum of deception war gaming scenarios to inform their future security posture.
The way forward: The growing sophistication and scale of cyber threats against U.S. critical infrastructure necessitates a shift toward AI-enabled tools and systems for its defense. There is a need to:
- Develop a national research and innovation strategy in autonomous cyber defense.
- Invest in research foundations to understand the capabilities and limits of frontier AI models as autonomous attackers.
- Create realistic testbeds, datasets, and benchmarks to rigorously evaluate the effectiveness of diverse defense strategies vs. diverse attack strategies.
- Evaluate mechanisms for transitioning these foundational advances through academic-industry-public sector partnerships for both open- and closed-door security evaluations.
What's next: CMU’s CyLab researchers are laying the groundwork for a broader research initiative leveraging AI-driven autonomous cyber operations to defend critical infrastructure. Researchers are creating community “leaderboards” that pit realistic AI-driven attack systems against AI-driven defense systems in realistic infrastructure environments. Their work includes developing foundational algorithmic and systems capabilities that will enable users to set up realistic cyber ranges and to design, test, and deploy novel attack and defense strategies.
The bottom line: As AI continues to reshape the cybersecurity landscape, the U.S. must support efforts across the nation to build the tools and infrastructure needed to evaluate, compare, and advance our capability to secure our critical infrastructure.