Carnegie Mellon University
IDeaS Center for Informed Democracy & Social-cybersecurity
CMU's center for disinformation, hate speech and extremism online

July 19, 2022

How Bots Manipulated Online Hate in the Pandemic

By Joshua Uyheng & Daniele Bellutta

Direct link to paper, published July 15, 2022: https://doi.org/10.1177/20563051221104749

tags: racism; hate speech; bots; social media; COVID-19 pandemic

The global COVID-19 pandemic brought not only massive waves of disease and infection but also crises of social division.

Race and racism, especially, came to the forefront of public discussion, with many describing a “racism pandemic” coinciding with the spread of the coronavirus. Minoritized racial and ethnic groups in the United States and around the world were subjected to targeted discrimination amid public health turmoil and suffered disproportionately high death tolls from the disease and its complications.

On social media, racist attacks likewise found fertile ground in which to spread. Yet while many instances of online hate did stem from genuine racist sentiment, inauthentic efforts to manipulate these conversations were also plentiful.

We investigated such activities in our latest paper, published in Social Media + Society. In this study, we discovered that automated accounts—also known as social bots—not only amplified the amount of hate in online racism conversations during the pandemic. They also managed to shift the targets of online hate in these discussions, funneling social media dialogue about race into polarizing discourse about the 2020 U.S. presidential election.

Automating and Amplifying Online Hate

Collecting hundreds of thousands of tweets from March and August 2020, we measured the likelihood that each tweet contained hate speech using the CASOS/IDeaS hate speech detection model. We also used the Netmapper software to identify whether each tweet mentioned various types of identities, including racial, gender, political, and religious identities.

By identifying bots in the dataset with the BotHunter tool, we then tracked the extent to which increased bot activity at one point in time predicted greater levels of hate speech later on. Sure enough, statistically significant relationships indicated that when there were more bots in the conversation now, there would be more hate speech later. Interestingly, however, bot activity and hate speech measured at the same time were not significantly correlated.

Practically, what this means is that although bots themselves might not be the most hateful accounts, their online activities may nonetheless trigger hate speech from others in the conversation—perhaps by sharing controversial information or connecting groups that disagree with each other to encourage vitriolic exchanges.

From Organic Racism to Inorganic Political Arguments

Beyond sheer amounts of online hate, however, we also found that bot activity was linked to shifts in the targets of that hate.

In particular, we saw that hate speech in March typically targeted people of Asian descent and that most of these messages were produced by humans. By August, however, more of the hate speech was directed toward American political figures and generated by bots.

For instance, in tweets mentioning racism, words like “Chinese” and “Asian” were mentioned significantly less often by bots than by humans, while words like “American” and “President” were mentioned significantly more often by bots.

Putting these observations together, we determined that while online hate toward Asian and Chinese people originated from humans—coinciding with much that has been written about racist sentiment around the origins of the pandemic—social bots were able to leverage the massive online attention these conversations attracted to sow discord around the U.S. elections.

Stopping Online Hate and Its Manipulation

Collectively, what these findings suggest is that to stop online hate—and more broadly, to address the problem of systemic racism—we also need to pay attention to the ways malicious actors can manipulate and redirect hate on social media.

Computational tools to detect such operations offer an important first step for monitoring when inorganic activities take place. But beyond quantifying their presence at a high level, it is also crucial that researchers reflectively engage with the wider societal fractures that online manipulation efforts highlight and seek to exploit.

Especially during a global crisis, social media can act as a mirror that reflects society’s deep divisions. But digital platforms may also serve as avenues to change and transform these rifts, for better or for worse. From this standpoint, perhaps ensuring social cybersecurity offers an important step toward holistically addressing these issues in our fast-changing world.