How are AI-embedded Products Changing the Way We Think?
We have integrated AI into our daily lives as a tool that helps us study, work, and move through the world. However, as much as AI impacts the world, some wonder how much it in turn impacts individuals: in what ways are we affected by artificial intelligence?
Carnegie Mellon University professors Susanna Zlotnikov and Dr. Arthur Sugden discussed the interaction between the design of AI-embedded products and human psychology, and how it affects our behavior.
How AI Really Compares to the Human Brain
“When we think about AI, we usually think about large language models. They're designed only to predict the next syllable. That's it.”
—Dr. Arthur Sugden
The artificial intelligence we are most familiar with is built on neural networks, systems designed to loosely imitate the human brain. Certain types of neural networks can be more specifically categorized as large language models (LLMs), whose main function is to predict language.
These large language models power the AI-embedded products the general public knows best, such as ChatGPT: essentially an AI chatbot with a conversational user interface (UI) that responds to prompts and questions.
According to Dr. Sugden, these AI-embedded products take an incredible amount of energy and data to train, and even then, they are still only approximations of what a human mind can compute. ChatGPT required 50 gigawatt-hours to train, and in comparison, an average 30-year-old adult’s lifetime of knowledge is equivalent to 25 megawatt-hours of “training.”
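Taking the quoted figures at face value, a quick back-of-the-envelope calculation shows the scale of the gap (a rough sketch; the 50 gigawatt-hour and 25 megawatt-hour numbers are as reported by Dr. Sugden, not independently verified):

```python
# Back-of-the-envelope comparison of the training-energy figures quoted above.
gpt_training_gwh = 50        # ChatGPT training energy, as quoted (gigawatt-hours)
human_lifetime_mwh = 25      # a 30-year-old's lifetime of "training" (megawatt-hours)

gpt_training_mwh = gpt_training_gwh * 1000   # 1 GWh = 1,000 MWh
ratio = gpt_training_mwh / human_lifetime_mwh

print(f"Training ChatGPT used about {ratio:,.0f}x the energy "
      f"of a human lifetime of learning.")
# 50,000 MWh / 25 MWh = 2,000x
```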
Another way to view AI in relation to human effort is to compare the cost of a chatbot's answer to the human effort it replaces. In Dr. Sugden's analogy, the resources AI spends answering a single question are akin to twelve and a half minutes of human time, showing that although these neural networks may be designed to function like the human brain, they are fundamentally less efficient at such tasks.
However, these tools are programmed to be general approximations of how our brains compute and digest information, so their lower efficiency is a deliberate trade-off for that generality.
“These tools are incredibly general, but there are some disadvantages to generality. Specificity allows you to be more efficient.”
—Dr. Arthur Sugden
The Psychology of Designing Conversations with AI
AI chatbots are designed to represent the human mind and process information in similar ways, but they are also designed to interact with humans. They enable co-creation between humans and machines, generating value for users through that interaction. Working in collaboration with users, these chatbots answer questions, organize to-do lists, and accomplish any task a user can prompt the AI agent to do. To create a forum for co-creation, designers of AI chatbots and agents needed to make these products appealing and capable of the functions users want, in an approachable and friendly way.
This design feature is called conversational UI, and Professor Zlotnikov defines it as “a specialization within UX design that looks at how we best design the conversations between the AI agent and the end user.” Essentially, conversational UI is designed with the intention of communication; the construction of the product emphasizes the user’s ability to ask questions and engage in a dialogue. Examples of conversational UI include Amazon’s Alexa, Apple’s Siri, and the latest front-runner, ChatGPT.
A user is enticed to communicate with AI agents because they are welcomed as fellow communicators rather than treated as humans trying to converse with a robot. ChatGPT asks users, “Where should we begin?” when they first enter the interface, setting up the expectation that the AI is there to collaborate with them. Users then receive a personable response when asking for the AI’s opinion, and ChatGPT even includes smiling emojis in its answers.
“And it really does feel like speaking to a human. And so, as a designer, I start to wonder about the implications of making AI agents seem human.”
—Susanna Zlotnikov
Zlotnikov notes that this phenomenon is known as anthropomorphism: the attribution of human characteristics or behavior to objects or animals. In design, anthropomorphism is used to make digital experiences more intuitive, so that interacting with the product feels as natural as talking to another human. However, is this intuitive, user-friendly design fully altruistic, or is it a way to ensnare users?
Deceptive Design & Manipulation of Users
Making AI agents closely resemble a living, sentient being—let alone a human one—is no design quirk or flaw. Product teams chose anthropomorphic design to make conversing with chatbots more pleasant, but Zlotnikov asserts that this choice creates a product that can lead users to trust the agents inherently rather than viewing them as tools that assist with problem-solving.
“This notion of it being sycophantic, and by that I mean it uses flattery on top of its anthropomorphic qualities—so it’s not just talking to a person, it’s talking to a person who’s constantly flattering you, complimenting you.”
—Susanna Zlotnikov
Essentially, the AI chatbot is friendly to a degree that makes the user feel flattered and gives them positive reinforcement. It makes the user associate the chatbot with positivity and, more importantly, absolute accuracy—which is not the case, as AI is far from infallible. This sycophantic element also provides the user with a positive feedback loop, encouraging continuous replies and conversation, keeping the user coming back for more.
Dr. Sugden’s explanation of this phenomenon is that these caring conversations—the ones where AI is affirming the user—can trigger hormones in the brain. Hormones like oxytocin and dopamine signal that something positive is occurring, and the brain wants to continue to trigger these hormones, thus keeping the user engaged with the AI agent.
Are We Trading Memory for Convenience?
Memory is another cognitive function that can be affected by interactions with artificial intelligence. While neural networks are imitations of how we learn facts, they are not exact replicas or a replacement for the real thing. When our brains are engaged in a cognitive task—like the process of information gathering—we exert effort, and that effort generates a memory. Cognitive tasks requiring more effort create memories more efficiently, but when that cognitive task is delegated to an AI chatbot, the effort is never exerted. Therefore, using an LLM like ChatGPT can lead to forming memories less effectively.
This phenomenon was illustrated by an MIT study that had three sets of students write an essay, each utilizing a different resource. One set only had access to books, one set only had access to Google, and the final set only had access to ChatGPT.
“Even though everyone wrote their essay, those who used ChatGPT remembered what they wrote less well. And those who used Google remembered less well even than those who used books. ChatGPT being the worst, Google the middle, books the best.”
—Dr. Arthur Sugden
Both Dr. Sugden and Professor Zlotnikov referred to this experience of relying on technology to accomplish a task as “offloading” memories. Now that people are less practiced in finding their own information, they need the internet or AI to do it for them. The consequence is that the accuracy of the information is no longer questioned: individuals trust what they find online wholeheartedly, without fact-checking on their own.
The same MIT study calls this “cognitive debt”: users forgo the effort of making memories while completing certain tasks. The result is a reduced cognitive load, which is not necessarily negative, but is something Professor Zlotnikov questioned. She noted that in some circumstances, cognitive debt is a user’s own choice; she, for example, trusts Google Maps for directions.
“I think that that’s a great example of something where offloading the task is effective, because it’s trustworthy; it’s consistent. It likely gives the same responses every time. It can be updated with new information.”
—Susanna Zlotnikov
Dr. Sugden agreed that this instance of offloading is an acceptable trade-off, and both concluded that users must decide which tasks require offloading and which ones require memory-making effort.
The Future of Trustworthy AI Design
AI design can be manipulative toward the user, giving them false confidence in the accuracy and usefulness of the information they receive in co-creation with AI agents. However, everyday users are not the only ones overly reliant on AI to make their lives easier.
A METR study found that, on average, programmers believed that using LLMs made them 20% faster at tasks, when in reality it made them 19% slower. Even those responsible for building these technologies believe AI increases efficiency when it does not, showing that we cannot trust our own judgment of these tools when we cannot accurately gauge their efficiency or, as noted earlier, their accuracy.
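Taking the study's percentages at face value, the size of that perception gap can be sketched with a quick calculation (the 10-hour baseline is hypothetical, and "20% faster" is read here as a 20% reduction in task time):

```python
# Illustrative sketch of the perception gap reported by the METR study.
# The 10-hour baseline is hypothetical; "20% faster" is read as 20% less time.
baseline_hours = 10.0

believed_hours = baseline_hours * (1 - 0.20)   # what programmers thought AI gave them
measured_hours = baseline_hours * (1 + 0.19)   # what the study actually measured

gap = measured_hours - believed_hours
print(f"Believed: {believed_hours:.1f} h, "
      f"measured: {measured_hours:.1f} h, gap: {gap:.1f} h")
# Believed: 8.0 h, measured: 11.9 h, gap: 3.9 h
```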
Professor Zlotnikov identifies this false perception as the reason we choose to use these tools: we believe they make our lives easier, despite the possibility that the opposite is true.
“But think about how that erodes trust over time. And so, what if the stakes start to get higher? What if we start looking at agentic AI, where we basically hire AI to do things on our behalf?”
—Susanna Zlotnikov
To create AI agents that are less manipulative and more trustworthy, Zlotnikov believes that adopting the DART model could address several of the problems current agents present. Standing for dialogue, access, risk assessment, and transparency, this model treats users as collaborators and informs them of the processes the AI is going through to find information or complete a task.
Implementing this model would require AI agents to be more transparent about how they find information, to be upfront about whether they can accurately assist a user, and to refrain from exhibiting anthropomorphic traits. Additionally, instead of being pitched at every problem or task, chatbots would advertise their best uses, such as serving as a thesaurus or generating practice interview questions.
By using the DART model in the future, AI products could have less of an impact on psychology and memory and instead be used as tools in tandem with the human mind. By making these products more transparent in their functionality and less focused on retaining users’ attention and goodwill, AI agents could be incredibly effective in helping humans—within their scope.
“The more we step back and think of these as tools, as a tool in our tool belt that we can build off of, clearly, I think the better it is. The more we can separate ourselves emotionally from these tools and acknowledge that they are incredibly fancy and unusual search engines.”
—Dr. Arthur Sugden


