Carnegie Mellon University
January 08, 2024

The Ethics of AI: Building Systems That Benefit Society

By Stacy Kish

In the 2010s, Americans invited new friends into their homes: Siri and Alexa. These personal intelligent agents (PIAs) use algorithms to adapt to users’ preferences and exhibit human-like characteristics to ease integration into daily life. 

While Siri and Alexa were at the leading edge of the artificial intelligence (AI) revolution, the evolution of this technology raises big questions about how AI will actually benefit society. 

“How many people bought an Alexa and don’t use it anymore?” said Alex John London, the K&L Gates Professor of Ethics and Computational Technologies in the Department of Philosophy at Carnegie Mellon University. “It turned out that the most immediate functionalities weren’t that useful for people. It is difficult to do things that are really beneficial.”

London teamed up with Hoda Heidari, the K&L Gates Career Development Professor in Ethics and Computational Technologies in the School of Computer Science at CMU, to model what sounds like a simple question: what does it take for an AI system to benefit a user, and what moral pitfalls can arise when those conditions for benefit are not met? Their findings are published on arXiv, a preprint server for articles posted prior to peer review.

In their study, London and Heidari present a framework for building technology that is beneficent (“doing good”) by design and that puts the human at the center, helping a person live a life that expresses their considered goals and values. This opens a big, complicated question for researchers: how, exactly, to do that. 

“This is a deep, humanistic paradox,” said London. “Everyone wants the good life but there is no single recipe for a good life, and people have broad disagreements and variation in how they spend their time.”

According to London, without a good sense of what the “human good” actually is, developers often adopt a proxy to make AI systems work. But a system can satisfy these proxy conditions while providing only trivial benefits to users or, in some cases, while actually harming them. Determining who benefits from a new system complicates matters further. The concern today is that AI systems will benefit companies and generate profits but not help individual users or society at large.

“The fact that people have different conceptions of the 'good life' is precisely what our work draws attention to,” said Heidari. “We encourage developers and creators of AI to understand the values and life plans of their targeted users and those potentially impacted by their AI products, starting from the early stages of design of new AI systems.”

In their work, London and Heidari shift the focus from making AI products that advance a company’s goals to advancing the goals of the individual. Using this approach, they have formalized a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit and avoid the pitfalls of deception, paternalism, coercion, exploitation and domination. 

“We make numerous value-laden decisions through the lifecycle of ideating, designing, developing and using AI systems,” said Heidari. “Those decision points are the way in which we bake our values into AI algorithms.”

London and Heidari show how beneficence is connected to concepts such as basic rights, moral freedom and individual well-being, and they connect these concepts to a host of morally problematic AI–human interactions. This work expands the vocabulary of morally salient issues to consider in conversations about responsible and ethical AI.

Their work draws attention to how principles such as fairness, accountability, transparency, safety and reliability, security and privacy typically become the focus only after an AI system has been built. They contend that other principles, such as beneficence, must be front and center in the earlier stages of ideation and design. 

According to London, this new approach connects the values that shape design with how people actually use these systems. It also bridges concerns about individual users with concerns about the long-term impact of current and future AI systems.

“We are trying to change the set of concerns to be more aligned so humans and AI can interact [in a way] that upholds and supports the humanity of people,” said London. “It is less about making people vulnerable to intrusion by other people.” 

Moving forward, London and Heidari will examine how the people who develop AI algorithms incorporate these ethical considerations into future designs. This new approach to AI could benefit individuals and larger groups and build justice and fairness into designs.

“We hope to turn this conceptual framework into effective educational material for AI designers and developers interested in the responsible and ethical use of AI,” said Heidari.

London received a National Science Foundation grant, and Heidari received funding from a PwC grant through the Digital Transformation and Innovation Center at CMU. London is also part of a larger NSF-funded project, the AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), led by Georgia Tech, which is at the cutting edge of addressing ethical issues in AI-driven technology.

London and Heidari plan to run workshops to disseminate the framework, apply it to ongoing AI projects and measure the efficacy of the approach in addressing ethical issues, as part of AI-CARING and the AI Institute for Societal Decision Making.