Reshmi Ghosh
Boosting AI’s integrity
Senior Applied Scientist and Technical Lead, Microsoft
Artificial intelligence tools falling into the hands of villainous sleeper agents may sound like a movie plot, but it’s a real possibility that data scientists like Reshmi Ghosh (ENG 2017, 2020, 2021) work to prevent every day.
“Security and safety risks around generative AI systems for search and agents are continuously evolving,” she says. “We develop safety filters to detect and prevent data exfiltration attempts, which happen when malicious instructions are hidden in a third-party document that’s uploaded with a prompt.”
Of course, Microsoft recognized this Trojan horse vulnerability long ago. As part of the MSAI Responsible AI team, Reshmi pioneered Azure Prompt Shields, a machine-learning model supporting safe and reliable prompting.
The safety models that Reshmi built run automatically in the backend, acting as both guardian angel and bouncer within the AI’s inner monologue. Should a malicious prompt slip through, it’s recognized, neutralized and tossed out before any mischief happens.
With these protections in place, millions of Microsoft users can trust that information from their emails, drives and documents can’t be mined and redirected.
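The gating pattern described above can be sketched in miniature. This is an illustrative toy only: Azure Prompt Shields relies on trained machine-learning classifiers, while the function name, patterns, and verdict shape below are assumptions made up for demonstration, not the actual product implementation.

```python
import re

# Hypothetical keyword patterns standing in for a trained injection classifier.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"exfiltrate|send .+ to http", re.I),
]

def shield_prompt(user_prompt: str, documents: list[str]) -> dict:
    """Screen a prompt and any attached documents before they reach the model.

    Hidden instructions in a third-party document are a common vector for
    data-exfiltration attempts, so each document is checked as well as the
    user's own prompt. Blocked requests never reach the downstream model.
    """
    candidates = [(user_prompt, "userPrompt")]
    candidates += [(doc, f"document[{i}]") for i, doc in enumerate(documents)]
    for text, source in candidates:
        for pattern in INJECTION_PATTERNS:
            if pattern.search(text):
                return {"attackDetected": True, "source": source}
    return {"attackDetected": False, "source": None}
```

For example, a benign prompt with a booby-trapped attachment would be flagged with `source` pointing at the offending document, while an ordinary request passes through untouched.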
Reshmi’s job is among the most coveted in the AI space. She joined Microsoft as one of about 20 members of the company’s annual AI Responsibility incubator, selected from a pool of almost 20,000 applicants. After rotations throughout the company, she landed in her present role focused on improving Copilot’s search and conversation capabilities.
She also mentors students in the Boston area who study AI, adding industry perspective to what they’re learning from professors.
“I love to see students grow as I have been a student for so long, so it’s a community that’s important to me and I want to see AI learning thrive,” she says.
Story by Elizabeth Speed