IDeaS Center for Informed Democracy & Social-cybersecurity

CMU's center for the study of disinformation, hate speech, and extremism online

About Our Research

The Center for IDeaS will advance the science of social cybersecurity through research on disinformation, hate speech, cyberbullying, and insider threat, and through community-building activities.

Research in IDeaS falls under the following areas:

1. Social Cyber-Forensics (Who is acting & reacting?)
2. Information Threats and Challenges (How are we attacked?)
3. Intent Identification (Why are they doing this?)
4. Indicators and Warnings (How can we detect harmful activities?)
5. Inoculation (How can people protect themselves?)
6. Countering (What responses are most effective in creating resilience?)

Large Research Projects

PROJECT OMEN

The goal of PROJECT OMEN is to establish a scalable training environment that helps people understand how they can be influenced by, and are influencing, others on social media. Based on a train-as-you-work philosophy, PROJECT OMEN uses immersive play to help people understand how they are impacted by disinformation, hate speech, and extremism online. The CMU IDeaS center is part of the large multi-institution group working on PROJECT OMEN. IDeaS has several lines of effort in PROJECT OMEN reflecting the breadth of research, training, and outreach in the center.

Critical Thinking in the Information Ecosystem
Researchers: Dr. David Danks, Dr. Mara Harrell
Collaborators: Office of Naval Research, Cognitive Performance Group 

Critical thinking, reasoning, and information evaluation all play key roles in our ability to successfully navigate the complex information ecosystem. This project is developing a set of critical thinking education & training modules to improve people’s ability to recognize narratives, understand the role of evidence, learn from varieties of content, and design responses or interventions to increase their own social cybersecurity. These modules will be usable by people from a range of backgrounds, and for a range of purposes.

Recognizing and Countering Influence Campaigns and Online Hostility
Researchers: Dr. Kathleen M. Carley; Ph.D. Students: Catherine King, Christine Lepird
Collaborators: Office of Naval Research, Intelligent Automation Inc., SoarTech, Air Force Research Lab 

Influence campaigns in the new information environment are conducted by altering who is talking to whom (the social structure), who is talking about what (the narrative), or both. This project is developing an immersive gameplay environment, complete with training materials, educational content, data, and lesson plans, for helping people identify, characterize, and, if needed, counter online hostile activity and actors. Particular attention is paid to helping the trainee learn to: 1) recognize bot, troll, and state-sponsored accounts; 2) recognize hate speech and online recruitment attempts; 3) understand how inauthentic actors help spread disinformation and hate speech and recruit to extremist groups; 4) recognize information maneuvers; and 5) improve the effectiveness of their response to online hostility.
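
As a concrete illustration of the kind of signal trainees learn to notice, the sketch below scores how "bot-like" an account looks from a handful of profile and activity features. It is a hypothetical heuristic written for this description; the feature names, thresholds, and weighting are assumptions and do not represent the detection models developed in the project.

```python
# Hypothetical heuristic illustrating bot-like signals for training purposes.
# Feature names, thresholds, and weights are assumptions, not the project's models.
from dataclasses import dataclass

@dataclass
class AccountFeatures:
    tweets_per_day: float       # posting rate
    followers: int
    following: int
    account_age_days: int
    default_profile_image: bool
    retweet_ratio: float        # fraction of posts that are retweets

def bot_likeness_score(a: AccountFeatures) -> float:
    """Return a rough 0..1 score; higher means more bot-like signals."""
    signals = [
        a.tweets_per_day > 100,                    # unusually high posting rate
        a.account_age_days < 30,                   # very new account
        a.default_profile_image,                   # no customized profile
        a.retweet_ratio > 0.9,                     # almost never posts original content
        a.following > 10 * max(a.followers, 1),    # follows far more accounts than follow it
    ]
    return sum(signals) / len(signals)

if __name__ == "__main__":
    suspicious = AccountFeatures(tweets_per_day=250, followers=12, following=4000,
                                 account_age_days=10, default_profile_image=True,
                                 retweet_ratio=0.97)
    print(f"bot-likeness: {bot_likeness_score(suspicious):.2f}")  # -> 1.00
```

In practice such cues are only probabilistic: a high score flags an account for closer inspection, not a verdict.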

Media Manipulation

Media can be manipulated in many ways, from deep-fake videos that make it look like someone is saying or doing something they are not, to simpler changes that alter which issues are trending and so are prioritized for the reader. In some cases, media manipulation may promote social good; in others it is used to destroy reputations, spread hate speech, attack at-risk minorities, or recruit people to extremist causes. The goal of the media manipulation research is to understand how we can help individuals and communities be more resilient in the face of such manipulated media, and to identify the principles and guidelines for separating acceptable from unacceptable manipulation. IDeaS has several lines of effort in media manipulation.

Ethics & Policy of Manipulated Media
Researchers: Dr. David Danks; Ph.D. Student: Jack Parker
Collaborators: Accenture Labs, Carnegie Endowment for International Peace 

Manipulated media, particularly heavily edited or synthetic images and videos, present some of the most significant challenges to our social cybersecurity. Pictures and movies can be particularly compelling, both psychologically and emotionally, so it can be hard to protect ourselves against false or misleading narratives and presentations. This project is examining ethical and policy issues around manipulated media, especially in the context of social media platforms: What ought they do about manipulated media? What can they do given existing policies and regulations? How might those policies change, or new ones be developed, to enable more ethical responses to manipulated media?

Content Moderation & Inauthentic Actors
Researchers: Dr. Kathleen M. Carley; Ph.D. Students: Joshua Uyheng, Daniele Bellutta
Collaborators: Accenture Labs, Office of Naval Research
Recent graduates: Dr. David M. Beskow 

This project has two foci: understanding the role of inauthentic actors and assessing alternative ways of countering their activity. For the first focus, the goal is to understand how inauthentic actors are used to manipulate which media is consumed and what content is prioritized. Inauthentic actors include bots, trolls, and cyborgs. This project explores the signs that these actors are operating in isolation or in a coordinated fashion, the extent to which they are successful in spreading disinformation and hate speech, and the strategies they use for spreading such information. For the second focus, the goal is to understand whether actor suspension or content moderation strategies are effective at stopping manipulation by these actors. This project considers what policy and remediation strategies are needed to counter online hostility, given that the technologies for identifying inauthentic actors and harmful messaging, such as disinformation, conspiracy claims, and hate speech, do not operate error-free.
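
The importance of those error rates for suspension policy can be made concrete with a small back-of-the-envelope calculation. The prevalence, precision, and recall figures below are illustrative assumptions, not measurements from this project.

```python
# Illustrative calculation: consequences of suspending accounts flagged by an
# imperfect bot detector. All numbers are assumed for the example, not measured.

def suspension_outcomes(n_accounts: int, bot_prevalence: float,
                        recall: float, precision: float):
    """Return (bots suspended, genuine accounts wrongly suspended, bots missed)."""
    bots = n_accounts * bot_prevalence
    bots_caught = bots * recall                    # true positives
    flagged_total = bots_caught / precision        # total accounts flagged
    humans_flagged = flagged_total - bots_caught   # false positives
    bots_missed = bots - bots_caught               # false negatives
    return bots_caught, humans_flagged, bots_missed

# Assume 1,000,000 accounts, 5% bots, and a detector with 90% recall / 95% precision.
caught, wrongly_suspended, missed = suspension_outcomes(1_000_000, 0.05, 0.90, 0.95)
print(f"bots suspended: {caught:,.0f}")                          # 45,000
print(f"genuine accounts suspended: {wrongly_suspended:,.0f}")   # ~2,368
print(f"bots missed: {missed:,.0f}")                             # 5,000
```

Even with a seemingly accurate detector, blanket suspension wrongly removes thousands of genuine accounts while some bots slip through, which is why the project weighs moderation strategies against these error costs.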

Automated Early Warning System for Cyber-Intrusion Detection

Cyber-attacks are a critical problem for society, and malware is used by attackers in a large class of cyber-attacks. The volume of malware produced is overwhelming, with hundreds of thousands of new samples discovered every day. The vast majority of malware samples are actually variants of existing malware, produced by transforming or obfuscating an existing sample in such a way that it can evade detection by end-point security products and other defenses.

The focus of this project is automated detection of cyber intrusion within end-user computing devices, meaning any dedicated computing device from a smart sensing node to a smartphone, all the way up to a laptop or desktop computer. A long-term trend over the past 30 years has been a steady increase in the number of independent end-user computing devices associated with an individual human user. This trend is a natural consequence of the steady increase in capability and decrease in cost of portable consumer electronics.

End-user computing devices are under increasingly sophisticated attacks from state and non-state adversaries, and the increasing number of computing devices per human user further increases the threat cross-section for such attacks. This project proposes to add a security co-processor that uses machine learning techniques to identify behaviors that are substantially different from the learned normal behavior of the human user. The research will extend to exploring ways to model the human user, ways for the security processor to interact with the human to improve security, and ways for the security processors on a local organization's devices to collaborate in order to improve the organization's security.
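
As a rough sketch of the kind of behavioral anomaly detection such a security co-processor could run, the example below fits an isolation forest to a user's historical per-session device-usage features and flags sessions that deviate from the learned normal behavior. The feature set, the choice of scikit-learn's IsolationForest, and the alerting rule are assumptions for illustration; the project's actual models and co-processor design are not specified here.

```python
# Minimal sketch: learn a user's "normal" device behavior, then flag sessions
# that deviate from it. Feature choice and model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row is one session: [keystrokes/min, outbound MB, processes spawned, login hour]
normal_sessions = np.column_stack([
    rng.normal(40, 5, 500),    # typing rate
    rng.normal(20, 8, 500),    # outbound traffic
    rng.normal(15, 3, 500),    # process count
    rng.normal(14, 2, 500),    # time of day
])

# Train only on the user's historical (assumed benign) behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A new session with heavy off-hours outbound traffic and many spawned processes.
suspect_session = np.array([[5.0, 400.0, 80.0, 3.0]])
prediction = model.predict(suspect_session)   # +1 = looks normal, -1 = anomalous
print("alert" if prediction[0] == -1 else "ok")
```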

Associates: Professor L. Richard Carley, Carnegie Mellon University, ECE; Professor Kathleen M. Carley, Carnegie Mellon University, ISR; Professor Diana Marculescu, University of Texas at Austin, ECE; Office of Naval Research / DoD

Projects on Polarization

Project #1

Like playing mental chess, comprehending real-life political arguments presented in prose places unrealistic demands on people’s limited cognitive resources (e.g., working memory). So, instead of reasoning about an argument as stated, most people reason about their own overly simplified representations – summaries of an author’s argument created on the fly. The psychological processes that generate these simplified representations are highly susceptible to automatic and effort-minimizing biases such as confirmation bias.

This suggests a promising explanation for why people are so often biased when evaluating arguments that oppose their own views. Moreover, it points to concrete methods for reducing political polarization. In a preliminary study, Professor Simon Cullen (CMU) and Vidushi Sharma (Princeton) found that presenting an argument visually, using color, line, and shape to display its logical structure, dramatically reduces confirmation bias compared to presenting exactly the same argument in prose. Recently, they replicated the original findings, while also discovering target arguments where visual presentation did not reduce confirmation bias. What makes the difference? In current work, Cullen, Professor Daniel Oppenheimer (CMU, SDS/Psychology), Nick Byrd (CMU, Intelligence Community Postdoctoral Fellow), and Sharma investigate whether the bias-reducing power of visual presentation emerges only when participants evaluate counter-attitudinal arguments that appeal to their own values.

To test this hypothesis, we developed arguments for paradigmatically liberal political views that have the unusual feature of appealing to paradigmatically conservative values (e.g., a pro-choice argument that appeals to the value of law and order), and vice versa (e.g., a pro-life argument that appeals to the interests of marginalized groups). We also developed matched arguments that lack this feature (e.g., a pro-choice argument that appeals to gender equality, a pro-life argument that appeals to the values of small government). We predict that the depolarizing power of visual argument presentation is real, but that it only emerges when people evaluate arguments that ultimately “speak their moral language.” That is, to be persuasive, political and moral arguments must not only communicate their logical structures clearly; these structures must also ‘bottom out’ in values their intended audiences actually share.

Associates: Professor Simon Cullen, Carnegie Mellon University, Philosophy; Professor Daniel Oppenheimer, Carnegie Mellon University, SDS/Psychology

Research Activities

Science of Social Cybersecurity

  1. Kathleen M. Carley (panel member), 2019, National Academies of Sciences, Engineering, and Medicine. A Decadal Survey of the Social and Behavioral Sciences: A Research Agenda for Advancing Intelligence Analysis. Washington, DC: The National Academies Press. [DOI]
  2. Kathleen M. Carley, Guido Cervone, Nitin Agarwal, Huan Liu, 2018, “Social Cyber-Security,” In Proceedings of the International Conference SBP-BRiMS 2018, Halil Bisgin, Ayaz Hyder, Chris Dancy, and Robert Thomson (Eds.) July 10-13, 2018 Washington DC, Springer. [DOI]

Social Cyber Forensics

  • Bot Hunter
    1. Beskow, David & Carley, Kathleen M. (2018). Bot Conversations are Different: Leveraging Network Metrics for Bot Detection in Twitter. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 825-832. IEEE. [DOI]
    2. Beskow, David & Carley, Kathleen M. (2018). Introducing Bot-hunter: A Tiered Approach to Detecting and Characterizing Automated Activity on Twitter. In Proceedings of the 2018 SBP-BRiMS Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction and Behavior Representation in Modeling and Simulation, Washington, DC, July 10-13, 2018, Springer. [pdf]
    3. David M. Beskow and Kathleen M. Carley, 2019, Social Cybersecurity: An Emerging National Security Requirement, Military Review, March-April 2019. [link]

  • Meme Hunter
    1. David M. Beskow, Sumeet Kumar, and Kathleen M. Carley, 2020, The Evolution of Political Memes: Detecting and Characterizing Internet Memes with Multi-Modal Deep Learning, Information Processing & Management Vol. 57, Issue 2. [link]
  • Social Media Characterization
    1. Binxuan Huang and Kathleen M. Carley, 2018, “Location Order Recovery in Trails with Low Temporal Resolution,” IEEE Transactions on Network Science and Engineering. [DOI]
    2. Binxuan Huang, Yanglan Ou and Kathleen M. Carley, 2018, “Sentiment Classification with Attention-over-Attention Neural Networks,” In Proceedings of the International Conference SBP-BRiMS 2018, Halil Bisgin, Ayaz Hyder, Chris Dancy, and Robert Thomson (Eds.) July 10-13, 2018 Washington DC, Springer. [DOI]
    3. Binxuan Huang and Kathleen M. Carley, 2019, “A Large-Scale Empirical Study of Geotagging Behavior on Twitter”, In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Vancouver, Canada. [DOI]

Iain J. Cruickshank & Kathleen M. Carley, "Analysis of Malware Communities Using Multi-Modal Features."  IEEE Access, vol. 8, pp. 77435-77448, 2020, https://doi.org/10.1109/ACCESS.2020.2989689

Kenny R, Fischhoff B, Davis A, Carley KM, Canfield C. Duped by Bots: Why Some are Better than Others at Detecting Fake Social Media Personas. Human Factors. February 2022. doi:10.1177/00187208211072642

Blane J, Bellutta D, Carley K. Social-Cyber Maneuvers During the COVID-19 Vaccine Initial Rollout: Content Analysis of Tweets. J Med Internet Res 2022;24(3):e34040. URL: https://www.jmir.org/2022/3/e34040 DOI: 10.2196/34040

Information Threats and Challenges

  • BEND
    1. David M. Beskow and Kathleen M. Carley, 2019, Social Cybersecurity: An Emerging National Security Requirement, Military Review, March-April 2019. [link]

Intent Identification

  • Influence Campaigns in Elections
    • Asia Pacific
      1. Joshua Uyheng and Kathleen M. Carley, 2019, “Characterizing Bot Networks on Twitter: An Empirical Analysis of Contentious Issues in the Asia-Pacific,” In Proceedings of the International Conference SBP-BRiMS 2019, Halil Bisgin, Ayaz Hyder, Chris Dancy, and Robert Thomson (Eds.) July 9-12, 2019 Washington DC, Springer. [DOI]

Indicators and Warnings

  • BEND
    1. David M. Beskow and Kathleen M. Carley, 2019, Social Cybersecurity: An Emerging National Security Requirement, Military Review, March-April 2019. [link]

Community Building Activities