By Reid Simmons

It’s August 2002 in Edmonton, Canada; I’m here for a robot competition. My team’s entry must register autonomously—which entails our robot navigating its way to the end of the line and waiting patiently for its turn to register. We watch, mortified, as a bug in our software causes the robot to miscalculate where the line ends and barge into the middle. The audience, however, loves it; many people later tell me that was the most “human” moment of the competition. Fast forward a few years to the deployment of the roboceptionist Valerie, which provides directions and snippets of its life story to people in Newell-Simon Hall. A faculty member tells me that Valerie failed the “f-you” test—he swore at the robot, and it didn’t react appropriately. We fix that problem, though on occasion we still find people typing novel swear words that Valerie is unprepared to handle.

I’ve spent nearly a decade working on “social” robots that try to adhere to human norms, both in conversation and navigation. The aim is to make interactions with robots more comfortable for people by making them more predictable: having the robots act in ways similar to those of humans. People often ask me why this is important; after all, we do not expect other technology, such as cars or microwaves, to behave socially. My response is twofold. First, robots will likely be the most complex technology with which people interact; having to learn an entirely new way of interacting with them would be a burden. Second, people apparently want to treat robots as social entities. For instance, many people type “hello” and “goodbye” to Valerie and thank the roboceptionist for answering their questions. Moreover, we’ve done experiments showing that people engage more with robots that have expressive faces, behave differently toward a robot depending on the mood it’s projecting, and prefer robots that maintain personal space while passing people. Others have shown that people exercise more with robots that match their own personalities and prefer robots that acknowledge their mistakes.

What makes the problem so difficult for researchers is getting robots to understand the complexities of human behavior. For example, most of us are taught to wait courteously until everyone exits an elevator before stepping on. But that isn’t quite the rule: what we actually wait for is everyone who intends to exit at that floor. For a robot to follow that etiquette, it must be able to infer people’s intentions. We humans do that effortlessly, by reading glances, expressions, postures, and gestures. Getting a robot to pick up on those cues and respond correctly is hard!
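To give a flavor of what “inferring intentions” might look like in code, here is a minimal sketch, not any deployed system, of the elevator case: a naive Bayes update over a few observable cues, deciding whether the robot should keep waiting. Every cue name, prior, likelihood, and threshold below is invented purely for illustration.

```python
# Hypothetical sketch: naive Bayes inference over whether an elevator rider
# intends to exit at this floor. All numbers are made up for illustration.

PRIOR_EXIT = 0.3  # assumed prior: chance an arbitrary rider exits here

# For each cue: (P(cue | intends to exit), P(cue | intends to stay)).
# Cues and likelihoods are illustrative, not measured values.
LIKELIHOODS = {
    "facing_door": (0.9, 0.3),
    "stepped_forward": (0.8, 0.1),
    "gaze_at_floor_display": (0.6, 0.4),
}

def p_exit(observed_cues):
    """Posterior probability that this rider intends to exit, given cues."""
    p_exit_joint = PRIOR_EXIT
    p_stay_joint = 1.0 - PRIOR_EXIT
    for cue, (p_if_exit, p_if_stay) in LIKELIHOODS.items():
        if cue in observed_cues:
            p_exit_joint *= p_if_exit
            p_stay_joint *= p_if_stay
        else:
            p_exit_joint *= 1.0 - p_if_exit
            p_stay_joint *= 1.0 - p_if_stay
    return p_exit_joint / (p_exit_joint + p_stay_joint)

def robot_should_wait(riders, threshold=0.5):
    """Keep waiting to board if anyone inside probably intends to exit."""
    return any(p_exit(cues) > threshold for cues in riders)

if __name__ == "__main__":
    riders = [{"facing_door", "stepped_forward"}, set()]
    print(robot_should_wait(riders))  # True: the first rider looks like an exiter
```

Even this toy version hides the hard part: reliably perceiving the cues themselves, from glances to shifts in posture, is where much of the real difficulty lies.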

Recently, I was quoted in National Geographic, saying that robots will routinely function in human environments within five to 10 years. Although I was being optimistic in that quotation, I do believe that it’s just a matter of time before we have robots interacting with us on a daily basis, on our own level. So, if you encounter a robot trying to make eye contact, remember that it’s trying its best to fit into our society. And try not to swear at it if it makes a faux pas.


Reid Simmons is a research professor and associate director for education at Carnegie Mellon’s Robotics Institute. His research interests focus on developing autonomous systems (especially mobile robots) that operate in rich, uncertain environments.