Carnegie Mellon University

[Image: human hand and robot hand]

August 31, 2020

Robots Should Replace Beasts of Burden, Not Humans, Tepper Faculty Argues

Contrary to current trends promoted by the media, human-like robots are less desirable than nonhuman-like ones because they tempt us to replace complicated human relationships with deficient robot relationships, according to a new book chapter authored by researchers from the Tepper School of Business.

“It is neither necessary nor desirable to build workplace robots that look and act like people,” says John Hooker, T. Jerome Holleran Professor of Business Ethics and Social Responsibility and Professor of Operations Research at Carnegie Mellon’s Tepper School of Business.

Hooker and Tae Wan Kim, an Associate Professor of Business Ethics, co-authored the chapter, “Humanizing Business in the Age of Artificial Intelligence,” which appears in Humanizing Business: What Humanities Can Say to Business, edited by Ed Freeman, Michel Dion, and Sergiy Dmytriyev.

Hooker and Kim explain that humans, especially in Western society, have a long history of anthropomorphizing animals, and this tendency has reappeared in how we relate to AI systems. For example, Boomer, an unsophisticated battlefield robot that did not interact with soldiers, received an elaborate funeral with a 21-gun salute after it was destroyed while disarming explosives in Iraq.

The chapter mentions many other examples of people humanizing robots, from the Canadian Broadcasting Corporation throwing a retirement party for five mail robots to senior citizens with dementia becoming attached to an AI device designed to look like a seal. However, Hooker and Kim argue that this behavior is merely an extension of how humans have always related to nonhuman intelligent beings, including hunting dogs, beasts of burden, and spiritual beings. It was not until the advent of the industrial age that people, particularly in northern Europe, began keeping animals as pets.

In a sense, machines have replaced beasts as the nonhuman intelligent beings that support particular tasks, so we can learn how to interact with robots from the rituals our ancestors developed when training donkeys, horses, and other beasts of burden. The authors explain that by training animals, humans become more aware of the animals' intellectual and emotional limitations. This leads to one of the chapter's suggestions: involve workers in the training of robots. As they experience the limitations of machine learning firsthand, workers will find it harder to anthropomorphize robots.

Another suggestion is to avoid giving robots a humanlike appearance, which the authors call “a needless nod to old science fiction movies” and which can make working with a robot more frustrating when it fails to perform human-like tasks competently. The authors also suggest that companies perform specific rituals when onboarding a robot that contrast with the rituals performed for new employees. For example, a new coworker might receive a welcome breakfast, but a robot should receive a different, more appropriate welcome that shows people how they should interact with the machine.

While the authors believe that anthropomorphizing robots is harmful to humans because it encourages us to reject more complicated and subtle interactions with other humans in favor of simplistic encounters with machines, they do not believe the solution is to avoid robots. Instead, the chapter concludes that humans have always drawn boundaries between our species and other nonhuman intelligent beings, and by studying our behavior in the past, we can map out a plan for interacting with robots in the present and future.

“An octopus is remarkably intelligent but inspires no one to anthropomorphize it in the slightest… We can make machines as intelligent as we want, with respect to the task at hand, without making them any more humanlike than work animals,” said Hooker.