Passing for Human: Rhetoric, Software Agency, and Communicatively Competent Machines
Author: Sean Zdenek
Degree: Ph.D. in Rhetoric, Carnegie Mellon University, 2001
This study is a rhetorical analysis of conversational software agents and software agency. These agents are designed to interface with humans in more "natural" ways by simulating and producing humanlike communicative and linguistic competence. For example, agents are designed to exhibit (or interpreted as exhibiting) any number of the following: gender, race, ethnicity, context-appropriate language use, agency, autonomy, nationality, a body, a humanlike voice, emotion, personality, sexuality, gesture, and gaze. This study follows in the tradition of the rhetoric of science and technology by analyzing the discursive resources through which software agents come to be perceived as humanlike interactional partners. In four case studies of conversational agents, I explore the assumptions about humans and humanness that inform the ongoing practice of doing technology. In short, I argue for a more rhetorically, linguistically, and socially sensitive definition of what it means for a software agent to act human by exploring 1) the ways in which designers, users, and other interested parties represent and interact with software agents as certain kinds of raced, gendered, and/or classed entities; and 2) the ways in which certain assumptions about "conversation"--for example, the assumption that conversation is a casual interaction between two strangers who share the conversational floor equally--are offered as unproblematic and work to constrain the agent design process.
In the first half of this study ("Durable But Problematic Bots"), I analyze some of the assumptions about "women" that inform the design and interpretation of two software agents, and argue that some software agents simulate and produce "humanness" stereotypically. In case one, I analyze a well-known proto-agent called "Julia" and show how its pre-programmed set of stereotypical utterances (e.g. "I have PMS today") works to reconcile its conversational inabilities with a certain view of woman as irrational or less than human. In case two, I analyze electronic mailing list discourse about an embodied agent called "Sylvie" and show how users and designers gender the agent but also locate it as an object to be acted upon. To the extent that Sylvie is represented as a wild territory, a thing to be tamed, it is another example of what Suzanne Romaine calls a "leak" in the natural gender system of the English language.
In the second half of this study ("Language and Social Context"), I analyze the assumptions about language, interactivity, and face-to-face communication that inform the design of an "embodied" agent system called "Rea" and a variant of the Turing Test called the Loebner Competition. In case three, I examine the conversational model that one prominent software agent research group at MIT has offered as "natural" or representative of face-to-face communication. This group's model assumes that face-to-face conversation can be defined unproblematically in terms of a relationship between two strangers who share the conversational floor equally. I attempt to show how this definition reduces conversation to an exchange of facts because it evacuates from the context of interaction the extent to which participants respond dynamically to variations in discourse function, formality level, social distance/solidarity, and their own relative degrees of power and status (Holmes 1992). In case four, I focus exclusively on one dimension of social context, discourse function, in order to show how the Turing Test avoids the question of function by privileging the idea of communicative competence as transferable and de-contextualized. Through discourse analyses of human-machine interactions, I attempt to show in the second half of this study why it is important to consider the social context of language use.