
INI Faculty Share Cybersecurity Insights In Honor of Cybersecurity Awareness Month
By Evan Lybrand
October is Cybersecurity Awareness Month, an acknowledgement of the ever-present significance of cybersecurity in the United States. The Cybersecurity and Infrastructure Security Agency (CISA) has chosen “Building a Cyber Strong America” as the theme for 2025, underscoring the importance of cybersecurity for not only national security efforts but also for businesses, state and local governments and individuals.
We asked some of our faculty to share their insights on the current state of cybersecurity and how they see the field evolving. Their perspectives range from the intersection of Artificial Intelligence (AI) and cybersecurity to advances in quantum computing to the role that everyday people play in maintaining a secure ecosystem.
Taha Khan is the most recent addition to the INI faculty as an associate teaching professor. His research interests include computer security, usable security, online privacy and internet freedom. He currently teaches Introduction to Computer Systems and will teach a new course in the spring about the technologies and policies of information control and censorship.
“I’m genuinely excited about AI’s growing potential, especially in how it can be used for cybersecurity education,” said Khan. “However, I’m also concerned about the rapid adoption of these systems, often without fully understanding their security implications. We’re still figuring out how they work and what their limitations are, yet they're already being widely deployed, even in some critical operations. The benefits are amazing, but the pace of implementation without a clear grasp of the risks is slightly troubling.
“Another area I see growing is augmented reality (AR), which also has me both excited and concerned. The technology is incredible and opens up amazing possibilities, but we're facing a major challenge around privacy that isn't getting enough attention. We’re moving from a world where you could at least tell when someone was recording you in public to one where AR glasses make surveillance practically invisible. It’s a fundamental shift that’s happening faster than our privacy frameworks can adapt.”
Michael Mattarock is the INI associate director and has a background in cybersecurity, AI and national security. Mattarock currently teaches Cyber Risk Modeling and Cyber Law and Ethics, in addition to his responsibilities as associate director.
“One of the toughest challenges in cybersecurity right now is how quickly the landscape is changing, especially with the prospect of a ‘quantum day’ (when quantum computers break the encryption we rely on today),” said Mattarock. “At the same time, AI is becoming central to defense, but we’re still figuring out how much to trust its outputs since uncertainty quantification doesn’t consistently provide measures of reassurance.
“What excites me most is the progress in making AI more reliable through things like probabilistic reasoning and stronger verification methods as we engineer more secure AI solutions. The field is heading toward a future where success isn’t just about stopping attacks but building systems that can adapt and stay resilient as new, unexpected threats — whether from AI, supply chains or quantum breakthroughs — emerge. We’re lucky to be doing much of this here at Carnegie Mellon University (CMU).”
David Varodayan, an associate teaching professor, is heavily involved in the college-wide initiative to create the country’s first suite of AI engineering graduate degrees. Varodayan helped to design and continues to teach some of the courses in the M.S. in Artificial Intelligence Engineering - Information Security (MSAIE-IS) program.
“Our lives will be increasingly supported by AI assistants: helpful tools that write emails, summarize documents and manage our schedules,” said Varodayan. “These systems, designed to act on our behalf, introduce a new risk: indirect prompt injection. An attacker can hide a malicious command in a webpage, email or document.
“When your AI assistant processes that data, it unknowingly executes the command, which could lead it to send unauthorized messages or expose sensitive information. The key mitigation is to carefully manage your AI assistant’s permissions, ensuring it can only access the resources it truly needs.”
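The permission-scoping mitigation Varodayan describes can be illustrated with a minimal sketch. The class and tool names below are hypothetical, not from any particular framework; the point is the deny-by-default check, so that a command injected into a document the assistant reads cannot invoke a tool the assistant was never granted.

```python
class Assistant:
    """Hypothetical AI assistant wrapper that enforces least privilege
    on tool calls, regardless of what the processed text asks for."""

    def __init__(self, allowed_tools):
        # Grant only the tools this assistant truly needs.
        self.allowed_tools = set(allowed_tools)

    def call_tool(self, tool_name, *args):
        # Deny by default: a tool not explicitly granted is refused,
        # even if injected text in a webpage or email requested it.
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"tool '{tool_name}' not permitted")
        return f"ran {tool_name}"


# A summarization assistant is granted a read-only tool and nothing else.
summarizer = Assistant(allowed_tools={"read_document"})
summarizer.call_tool("read_document", "report.pdf")  # permitted

try:
    # An indirect prompt injection hidden in the document might instruct
    # the assistant to exfiltrate data by email; the check blocks it.
    summarizer.call_tool("send_email", "attacker@example.com")
except PermissionError as e:
    print("blocked:", e)
```

This does not stop the injection itself; it limits the blast radius, which is why managing an assistant's permissions is the key mitigation.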
Hanan Hibshi is an assistant teaching professor and teaches many of the information security-focused courses at the INI, including Introduction to Information Security and Browser Security.
“I am excited to see how the introduction of AI into our daily lives is influencing our need for more cybersecurity protections,” said Hibshi. “We are now challenged to look into new threats, think about new defenses and design new training models for humans to be able to assess when the AI is correct or not, while still dealing with existing threats that continue to impact AI systems.”
