
January 24, 2022

How Information Operations against Russian Opposition on Twitter Attempted to Influence Global Audiences

By Iuliia Alieva

Image caption: An example of a tweet sharing a story claiming that the US, the UK, and Germany staged Navalny's poisoning.

Tags: social media; disinformation; social network analysis; bot; troll; information operations; influence operations

Recent Publication:

I. Alieva and K. M. Carley, "Internet Trolls against Russian Opposition: A Case Study Analysis of Twitter Disinformation Campaigns against Alexei Navalny," 2021 IEEE International Conference on Big Data (Big Data), 2021, pp. 2461-2469, doi: 10.1109/BigData52589.2021.9671589. URL: https://ieeexplore.ieee.org/document/9671589

With the development of technology, journalism, and social media, investigating the activities of malicious actors online has become a new interdisciplinary field. Scientists, reporters, and NGOs are working to detect coordinated inauthentic behavior, mis/disinformation campaigns, and hate speech, and to develop solutions for preventing such harmful activities.

The academic community has only recently begun to recognize the importance of the problem. The 2016 U.S. presidential election, Brexit, and other political campaigns demonstrated that there are multiple, widespread attempts to manipulate public opinion online. The activities of bots and trolls attracted a great deal of attention from scientists across fields. It was then that the global community learned of the existence of troll farms; in particular, the Russian Internet Research Agency (IRA) was identified as a main source of malicious activity (Bastos & Farkas, 2019; Linvill & Warren, 2020).

The recent COVID-19 pandemic has exacerbated the problem, causing an even more uncontrollable wave of disinformation about the virus, vaccines, preventive measures, and treatments (check out our project about COVID-19 vaccine conversations in PA). However, authoritarian governments have used bots and trolls for a long time to exercise stronger information control and to manipulate protest activities.

Online Information Operations as a Tool of Authoritarian Control

Authoritarian regimes have long used bots and trolls to advance their political goals. Offline demobilization and online agenda control are cited as the main end goals of the political use of social media actors such as bots, trolls, and cyborgs (Stukal et al., 2020). The way platforms and algorithms work makes it easier to target the right audience and to reach people domestically and internationally for the purpose of political influence. With enough bots, it is possible to sow informational chaos, test political attitudes, infiltrate online conversations, and push regime-friendly opinions to diverse groups of users. The activity of the Internet Research Agency demonstrates that the current Russian political regime employs bots to serve its domestic and foreign policy goals, deepening polarization and sowing distrust in the institutions of its political opponents. In our recent study, we found that even a domestic Russian political issue, such as the discussion around the Russian opposition, is used to influence international audiences and undermine democratic institutions globally.

How Russian domestic politics became a global influence campaign

Our study presents a case analysis of Twitter bot operations focused on the internal political confrontation between the Russian systemic political establishment and the opposition movement of Alexei Navalny. We analyze how Internet trolls and sockpuppets are used to conduct information disorder activities that frame the discussion around the opposition movement in Russia on Twitter. We also identified attempts by the same malicious actors to manipulate the opinion of Western audiences and to spread disinformation about Western democracies. The study uses network analysis to identify disinformation and propaganda trolls. We observe how an internal domestic issue is framed in the context of Russia's confrontation with the West and used to promote hostile narratives against democratic institutions, presenting them as enemies of Russia and the world.
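
To illustrate the kind of network analysis involved, here is a minimal sketch in Python (using networkx) of building a retweet network and flagging accounts with unusual interaction patterns. The account names, edges, and thresholds below are hypothetical assumptions for illustration only, not the data or the detection pipeline used in the study.

import networkx as nx

# Hypothetical retweet edges: (account_that_retweeted, account_being_retweeted).
# In practice these would be extracted from collected tweet data.
retweets = [
    ("user_a", "suspect_1"), ("user_b", "suspect_1"),
    ("user_c", "suspect_1"), ("suspect_2", "suspect_1"),
    ("user_a", "journalist"), ("suspect_1", "suspect_2"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# One simple structural signal: accounts that are heavily retweeted but
# rarely retweet others (high in-degree, low out-degree) may be amplified
# content sources worth closer inspection alongside content-based cues.
for node in G.nodes:
    in_deg, out_deg = G.in_degree(node), G.out_degree(node)
    if in_deg >= 3 and out_deg <= 1:
        print(f"candidate amplified account: {node} "
              f"(retweeted {in_deg} times, retweets others {out_deg} times)")

In a real pipeline, such structural signals would be combined with community detection and content analysis before labeling any account as a troll or bot.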


In our case, we cannot say that the accounts we identified are affiliated with the IRA or any other group or government, although they clearly exhibit the characteristics of IRA accounts identified in previous research. Previous studies found that IRA accounts exploited the group identities of communities on Twitter, focused on political agendas, and leveraged users' ideological backgrounds (Freelon & Lokot, 2020). IRA accounts often use political content to spark political distrust and intergroup hate. In our study, troll and bot accounts interacted with authentic users to infiltrate their communities and spread their political agenda.

Many of the agents we identified pretend to be real English-speaking people who exhibit hostile attitudes toward Navalny and Western democracies, promoting distrust in democratic institutions and spreading disinformation and conspiracy theories. They often exploit democratic values or issues from the discourse of Western democracies, leveraging political polarization and promoting distrust in politicians, government, and the media while amplifying negative sentiment among English-speaking users.

Future research should analyze the strategies and types of malicious activities online to improve both the identification of such operations and the mechanisms for their prevention.