Carnegie Mellon University

May 01, 2025

Dr. Emily DeJeu on Using Large Language Models to Analyze Sensitive Discourse

Commentary on Article About Coding Hate Speech Offers Nuanced Look at Limits of AI Systems

Caitlin Kizielewicz

Large language models (LLMs) are artificial intelligence (AI) systems that can understand and generate human language by analyzing and processing large amounts of text. In a new essay, a Carnegie Mellon University researcher critiques an article on LLMs and provides a nuanced look at the models’ limits for analyzing sensitive discourse, such as hate speech. The commentary is published in the Journal of Multicultural Discourses.

“Discourse analysts have long been interested in studying how hate speech legitimizes power imbalances and fuels polarization,” says Emily Barrow DeJeu, Assistant Teaching Professor of Business Management Communication at Carnegie Mellon’s Tepper School of Business, who wrote the commentary. “This seems especially relevant today amid rising populism, nativism, and threats to liberal democracy.”

DeJeu’s commentary responds to an article that appears in the same issue of the Journal of Multicultural Discourses, titled “Large Language Models and the Challenge of Analyzing Discriminatory Discourse: Human-AI Synergy in Researching Hate Speech on Social Media,” by Petre Breazu, Miriam Schirmer, Songbo Hu, and Napoleon Katsos. The article explores the extent to which LLMs can code racialized hate speech.

Using computerized tools to analyze language is not new. Since the 1960s, researchers have been interested in computational methods for examining bodies of text. But some forms of qualitative analysis have historically been considered strictly within the purview of human analysts, DeJeu says. Today, there is increasing interest in using LLMs to analyze discourse.

Unlike other analytical tools, LLMs are flexible: They can perform an array of analytical tasks on a variety of text types. While the article by Breazu et al. is timely and significant, DeJeu says the work also faces challenges because LLMs have strict safeguards that prevent them from producing offensive or harmful content.
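To make the coding task concrete, the sketch below (not drawn from the article) shows what LLM-assisted qualitative coding of social media comments might look like in Python. The call_llm function, the codebook labels, and the refusal check are illustrative assumptions, not the authors’ actual method or codebook.

```python
# Hypothetical sketch of LLM-assisted coding of social media comments.
# call_llm() is a placeholder for whatever model API a researcher uses;
# the codebook below is illustrative, not the one used by Breazu et al.

CODEBOOK = ["dehumanizing language", "exclusionary rhetoric", "neutral", "other"]

PROMPT_TEMPLATE = (
    "You are assisting a discourse analyst. Assign exactly one label from "
    "this list to the comment below, and briefly justify your choice.\n"
    "Labels: {labels}\n"
    "Comment: {comment}"
)


def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; swap in an actual model API."""
    raise NotImplementedError("Connect this to the model API of your choice.")


def code_comment(comment: str) -> dict:
    """Ask the model to code one comment, flagging refusals from safety filters."""
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(CODEBOOK), comment=comment)
    try:
        response = call_llm(prompt)
    except Exception as err:  # network failures, hard refusals, missing API, etc.
        return {"comment": comment, "label": None, "note": f"call failed: {err}"}

    # Safeguards often surface as refusals rather than labels, which is one
    # reason coding hate speech is harder for LLMs than ordinary text analysis.
    if "i cannot" in response.lower() or "i can't" in response.lower():
        return {"comment": comment, "label": None, "note": "model refused"}
    return {"comment": comment, "label": response.strip(), "note": "ok"}
```

In a workflow of the kind the article examines, a researcher would compare such model-assigned labels against human coders’ labels, for example with an inter-rater agreement measure, rather than treating the model output as authoritative.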

While DeJeu commends the authors for their human- and LLM-driven coding of YouTube comments posted on videos of Roma migrants begging for money in Sweden, she identifies two problems with their work:

  • Methodological issues: DeJeu suggests that the authors’ methodological design seems to conflict with their goal of exploring human-AI synergies. Instead, it introduces a human-versus-AI binary that persists throughout the article, so the piece ultimately reads less as an exploration of human-AI synergies and more as an indictment of ChatGPT’s ability to code like an expert researcher.
  • A flawed conclusion: DeJeu says Breazu and colleagues’ call for culturally and politically informed LLMs goes beyond simply expanding LLMs’ knowledge bases; the authors seem to want a future in which LLMs can act as situated humans would, bringing politically and culturally informed perspectives to bear on their analysis and reasoning from those perspectives to interpretations of reality. She asks: “Is it reasonable to expect AI tools to do this, when human history shows that cultural meaning is constructed, contested, and subject to change?”

DeJeu says the article is valuable for prompting researchers to reconsider what synergy means when working with AI tools. She concludes her commentary by addressing what roles LLMs should play in critical discourse analysis. Should LLMs be used iteratively to refine thinking, should researchers try to get them to perform like humans in order to validate or semi-automate research processes, or should there be some combination of the two?

“The field will probably eventually clarify what human-AI coding looks like, but for now, we should consider these questions carefully, and the methods we use should be designed and informed by our answers,” DeJeu cautions.

###

Summarized from a commentary in the Journal of Multicultural Discourses, “Can (and Should) LLMs Perform Critical Discourse Analysis?” by DeJeu, E. B. (Carnegie Mellon University). Copyright 2025 Informa UK Ltd. All rights reserved.