Carnegie Mellon University

April 23, 2020

New Framework Helps Consumers Understand Machine Learning Algorithms

Mara Falk
  • Director of Communications and Media Relations
  • 412-268-3486

Machine learning algorithms are rapidly taking on important roles in decisions made by consumers and managers. The algorithms, a subset of artificial intelligence, allow computer systems to carry out tasks by relying on patterns and inference rather than explicit instructions. But the systems behind the algorithms are often poorly understood by the people who use them.

A new study developed a framework that defines the characteristics needed to explain machine learning algorithms. The framework draws on pragmatic theories of explanation in the philosophy of science, which argue that a good explanation depends on the goals of its recipient. The researchers illustrated one way the framework can be used with an experiment that measured the impact of different factors on people's perceived understanding, use, and trust of AI systems.

The study, by researchers at Carnegie Mellon University, has been accepted at the Conference on Artificial Intelligence, Ethics, and Society, hosted jointly by the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery.

Understanding Machine Learning Algorithms

“Our framework provides a concrete guide for managers on how to present information about machine learning algorithms to foster trust and encourage use,” explains Joy Lu, Assistant Professor of Marketing at CMU’s Tepper School of Business, who led the study. “Our work also identifies the information needed to explain the algorithms, so researchers and engineers working on AI can design, develop, and use them more effectively.”

Algorithms are widely used today across many domains, including finance, medicine, and law (e.g., to approve credit cards and loans or to detect diseases). But because the most successful algorithms are typically not transparent, users find it challenging to understand the algorithms’ output and why they make the predictions they do. In some cases, the use of algorithms has been harmful (e.g., an algorithm used to make decisions about parole and bail was found to be systematically biased against Black defendants).

To ensure that algorithms work well and are more easily understood by users, the researchers sought to identify what makes a good explanation of machine learning output. First, they developed a framework describing the features of an explanation that might be relevant, drawing on psychological, ethical, and computer science perspectives. Then they carried out a large-scale lab experiment examining how consumers responded to explanations of hypothetical credit loan decisions, demonstrating how the framework can be adapted to a real-world context.

The researchers used the German Credit Data set from the University of California, Irvine, Machine Learning Repository, which consists of 1,000 individual profiles, each characterized as a good or bad credit risk. They recruited 1,205 adults in the United States on Amazon’s Mechanical Turk platform and asked them questions based on these data. Participants were instructed to imagine that they were applying to a regional bank for a loan to purchase a new car, and that the bank used an algorithm to determine whether to approve or reject the application.

Participants were randomly assigned to a condition in which their loan application was either approved or rejected. They were then shown one of six explanation conditions that varied in the information provided (e.g., how employment, credit duration, or installment rate affected the decision), or were shown no explanation and simply told the outcome of the loan decision. After viewing the explanation, participants were asked to rate it (and, by extension, the algorithm) on understanding, intuitiveness, fairness, and satisfaction.
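As a rough illustration of the kind of setup described above (a sketch under assumptions, not the authors' actual pipeline), the following Python snippet fits a simple decision-tree classifier to the UCI German Credit data and reports an approve/reject outcome for a single applicant. The local file name, placeholder column names, and model settings are assumptions made for this example.

# Minimal sketch (not the authors' pipeline): fit a simple classifier to the
# UCI Statlog German Credit data and report a hypothetical loan decision.
# Assumes `german.data` (20 attributes plus outcome, space-separated) has been
# downloaded locally from the UCI Machine Learning Repository.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

cols = [f"attr{i}" for i in range(1, 21)] + ["risk"]      # placeholder names
df = pd.read_csv("german.data", sep=r"\s+", header=None, names=cols)

X, y = df.drop(columns="risk"), (df["risk"] == 1)         # 1 = good credit risk
categorical = X.select_dtypes(include="object").columns.tolist()

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)

# The kind of approve/reject outcome participants were told, plus accuracy.
applicant = X_test.iloc[[0]]
print("approved" if model.predict(applicant)[0] else "rejected")
print("held-out accuracy:", round(model.score(X_test, y_test), 3))

A shallow tree such as this one can also be drawn as a flowchart, which is one way a decision-tree explanation like the one shown to some participants can be presented.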

Explanation Softens Negative Outcomes

“We sought to determine how the different parts of an explanation of machine learning algorithms might affect users’ perceptions of understanding of the process,” explains Dokyun (DK) Lee, Assistant Professor of Business Analytics at CMU, who co-authored the study. “The experiment yielded both intuitive and counter-intuitive results, which opens several avenues for further testing and theory development.”

Specifically, the study found that:

  • Participants whose credit loans were approved rated the explanation more positively than participants whose loans were rejected, regardless of the explanation type. When outcomes are positive, the researchers surmised, recipients may not care about explanations, which suggests that firms should focus on improving explanations when there are negative outcomes.
  • Among participants whose credit loans were approved, those who were shown the decision tree algorithm, one of the most popular algorithms used in machine learning, rated the explanation less positively than those who were not shown any explanation. One possible reason is that the visualization of the decision tree gives the impression of an overly simplistic system, whereas participants expect the firm to use a more complex or sophisticated algorithm.
  • Participants whose credit loans were rejected did not rate the intuitiveness and fairness of the neural network algorithm explanation more positively (relative to no explanation) unless they were offered both a global explanation (a rationale for the population average) and a local explanation (a specific explanation for the individual participant). This result suggests that there may be added value in providing consumers with both a big-picture overview and a personalized explanation (see the sketch below).
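To make the global/local distinction in the last finding concrete, here is a minimal, hypothetical sketch rather than the method used in the study: a small neural network scores applicants, a "global" explanation is computed as permutation importance averaged over many applicants, and a "local" explanation for one applicant is computed by perturbing each of that applicant's features toward the population average. The feature names and data are synthetic placeholders.

# Hypothetical sketch of "global" vs. "local" explanations (synthetic data;
# not the study's method). Global: permutation importance over a population.
# Local: per-feature perturbation for a single applicant.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = ["credit_duration", "installment_rate", "employment_years"]  # placeholders
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Global explanation: which features matter on average across the population?
global_imp = permutation_importance(net, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(features, global_imp.importances_mean):
    print(f"global importance of {name}: {score:.3f}")

# Local explanation: how much does each feature move this applicant's score?
applicant = X_test[:1]
base = net.predict_proba(applicant)[0, 1]
for i, name in enumerate(features):
    perturbed = applicant.copy()
    perturbed[0, i] = X_train[:, i].mean()  # swap in the population average
    delta = base - net.predict_proba(perturbed)[0, 1]
    print(f"local contribution of {name}: {delta:+.3f}")

Presenting both outputs together mirrors the finding above: the population-level importances give the big-picture overview, while the per-applicant deltas explain that particular decision.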

“For managers, our framework can be used to uncover the best way to present algorithmic prediction rationales to users to foster trust and adoption,” adds Tae Wan Kim, Associate Professor of Business Ethics at CMU, who also co-authored the study. “For researchers who study AI, we provide dimensions of explanations that can be considered in devising algorithms. And for business and social science researchers, our framework could foster more investigation of different factors that affect the quality of explanations.”

The research was funded by the Carnegie Bosch Institute.

###

Summarized from the AIES Conference paper “Good Explanation for Algorithmic Transparency” by Lu, J. (Carnegie Mellon University), Lee, D.K. (Carnegie Mellon University), Kim, T.W. (Carnegie Mellon University), and Danks, D. (Carnegie Mellon University). Copyright 2019 Association for the Advancement of Artificial Intelligence. All rights reserved.