Carnegie Mellon University


September 26, 2019

Tepper Ethics Lunch Series: Christoph Lütge Deconstructs the Ethics of AI

When Christoph Lütge thinks about applying business ethics to artificial intelligence, he often recalls renowned physicist Stephen Hawking’s famous warning: “The rise of AI could be the worst or the best thing that has happened for humanity.”

But people often remember only half of that sentence and focus on AI’s dangers, Lütge noted. What they forget is how AI can enhance or extend lives. An integral part of that equation, he believes, is ensuring that machines are programmed to behave ethically.

Lütge, the director of the Institute for Ethics in Artificial Intelligence at the Technical University of Munich, shared his thoughts on how to achieve that goal as part of the Tepper Ethics Lunch Series in a talk titled “Humanity’s Last Chance or Highway to Hell? Ethical Opportunities and Challenges of Artificial Intelligence.”

While artificial intelligence has been around for years, it is becoming more noticeable in many people’s everyday lives, thanks to applications such as product recommendations on retail platforms and virtual assistants like Apple’s Siri and Amazon’s Alexa. That visibility has stoked a new wave of concerns about the potential for AI to go awry.

“For me, ethics is not just about what might be bad, but also what is good? Maybe there is a win-win situation for the company,” Lütge said. Artificial intelligence systems “solve old problems, and improve on old issues, and they introduce some new problems.”

For example, AI can help doctors use telemedicine to provide better health care to people in remote areas. Smart grids can address rising global energy demand. Algorithms can deliver more accurate cancer diagnoses. Chatbots help women in socially restrictive countries address taboo medical topics. In these ways, AI systems can enhance human agency and increase societal capabilities, Lütge said. On the flip side, they can also increase vulnerability to cyberattacks, erode privacy, or expose people to technical errors.

An important component of introducing business ethics to artificial intelligence is showing how ethical risks for companies can eventually become economic risks. A problematic system could damage the company’s brand or force it to pay penalties, Lütge explained; a code of ethics could serve as an early-warning system that steers the business clear of such problems.

Tae Wan Kim, associate professor of business ethics, agreed.

“We have to offer something,” he said in a joint interview prior to Lütge’s presentation. An algorithm that improves efficiency while also ensuring fairness is one example of how a company could benefit from AI infused with ethics, he noted.

Likewise, Lütge pointed out that AI requires more public trust, which ethical frameworks could help establish.

“The key will be for the technology to gain the necessary societal acceptance; otherwise, it will not be used,” he said.