Carnegie Mellon University


August 23, 2017

Researchers Consider Ways To Add Fairness in Automated Systems

By Vidya Palepu

Media contact: Daniel Tkacik, College of Engineering, 412-268-1187

Anupam Datta, associate professor of electrical and computer engineering at Carnegie Mellon University, is leading a $3 million National Science Foundation-funded project to improve automated decision-making systems, which affect everything from online advertising and health care industries to criminal justice.

"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Datta, who is based at CMU's Silicon Valley Campus and is a part of CyLab, CMU's Security and Privacy Institute. "These explanations then inform fixes to the system to avoid future violations."

The team includes Matthew Fredrikson, assistant professor of computer science at CMU; Ole Mengshoel, principal systems scientist in electrical and computer engineering at CMU; Helen Nissenbaum, professor of information science at New York University; Thomas Ristenpart, associate professor of computer science at Cornell University; and Michael C. Tschantz, senior researcher at the International Computer Science Institute in Berkeley, California, who received his Ph.D. in computer science from CMU in 2012.

Mengshoel said that defining what privacy and fairness mean for a system that uses machine learning and artificial intelligence can be a challenge.

"But doing so is critical," Mengshoel said, "since these methods are increasingly used to power automated decision systems."

Datta and Tschantz previously conducted research showing that significantly fewer women than men were shown online ads promising help in getting jobs paying more than $200,000, raising questions about the fairness of online ad targeting.

Fredrikson said the current project also will work on ways to balance intellectual property rights and the privacy of users.

"This project will be a great opportunity to ... improve machine learning to be more privacy friendly," Fredrikson said.