Carnegie Mellon University

Center for Informed Democracy & Social Cybersecurity (IDeaS)

CMU's center for disinformation, hate speech and extremism online


 

2020 has been a year marked by worldwide crises and changes, including a global pandemic, efforts to increase social justice, nation-shaping elections, massive wildfires and other environmental events, to name just a few. These crises and changes—in particular, their social dimensions—have been directly impacted and shaped by disinformation, influence campaigns, and other efforts to undermine people’s understanding and autonomy.

We will hold a special virtual conference on Social-cybersecurity in Times of Crisis and Change this November 2020. This conference aims to advance the science of social-cybersecurity through research and applications that address two questions. How does the unprecedented scale of false information and its spread impact human activity in times of crisis and social change? And, what are the mechanisms that enable or contain the spread of false information during these times?

The institute and conference charge no registration fee, but registration is required.

Conference Highlights:

  • The virtual institute will include: invited panels, virtual posters, regular talks and tutorials.
  • "Selling Lies" with director Leslie Iwerks
First, we will screen the film together and then open the discussion with a Q&A session with director Leslie Iwerks, moderated by IDeaS Co-Director David Danks.
  • Special Double Panel on Disinformation, Social Cybersecurity: The Path Ahead
Moderated by IDeaS Co-Director Kathleen M. Carley, panelists will discuss current forms of disinformation in our public discourse. Panelists include David A. Broniatowski, Conrad Tucker, David Mussington, and Filippo Menczer. Social cybersecurity is an emerging scientific discipline critical for the future. Panelists will discuss the area and illustrate the types of tools and technologies that exist and that are needed in this emergent area. Examples are drawn from diverse events including COVID-19 and the US elections.
  • The challenge of deepfakes and manipulated content - Panel and Q&A
This panel moderated by IDeaS Co-Director David Danks will discuss the challenges of manipulated content. 
  • IDeaS Knight Fellows
Emerging science in social cybersecurity is demonstrated in these talks by the CMU IDeaS Knight Fellows.
View videos here. Guest password to view: ideas2020





Institute & Conference Agenda

**tentative, all times US Eastern

last update Tuesday November 17th 2:05pm

Live and Taped Tutorials are offered

This is a free event 

Taped Tutorials will be available beginning at 9 am on November 18th through November 21st.

Following is the schedule for live tutorials. Additional information on the tutorials is below.

10:20am Critical thinking and resilience - Dr. David Danks
1:00pm Social Influence Campaigns - Dr. Kathleen M. Carley
3:00pm Deepfakes and synthetic video - Dr. David Danks
4:30pm Introduction to Network Analytics for Social Media - Jeff Reminga
  • Social Influence Campaigns (live on November 18th)
This tutorial will discuss how social influence campaigns are conducted in social media. The nature of social media and disinformation are reviewed. Information maneuvers used to conduct influence campaigns are presented. Technology pipelines for tracking and assessing social influence campaigns are presented and key tools demoed. Examples are drawn from the EuroMaidan Revolution, COVID-19, various elections, and natural disasters. Key topics covered are social cybersecurity, information warfare, social influence, metrics for assessing influence, and misleading indicators.
  • Facebook and Reddit Data in ORA (pre-recorded)
This tutorial will discuss how to upload and visualize Reddit data in ORA for analysis. Examples will be shown using Reddit data from current events.
  • Networked Time Series Analysis and Clustering (pre-recorded)
In network science, a trail is defined as a time series of categorical states, and can be used to model many interesting phenomena. Trail clustering, then, is the problem of discovering similar trails in a trail dataset. For example, a trail dataset containing trails of URL shares by users on social media may be clustered to find similar users. First, we will cover the necessary background in time series and network analysis. Then, we will cover trail analysis basics and trail comparison methods. From there, we will expand on different challenges faced in trail analysis, including heterogeneous state-types, higher-order states, and highly dynamic data. Finally, we will consider the scalability of the approaches discussed. A case study of social media analysis will be used to demonstrate the discussed trail clustering methods in practice.
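To make the idea concrete, here is a minimal illustrative sketch (not the tutorial's actual method; the normalized edit-distance comparison, the `max_dist` threshold, and the greedy grouping are all simplifying assumptions): a trail is a list of categorical states, and two trails are grouped when their normalized edit distance is small.

```python
def edit_distance(a, b):
    # Levenshtein distance between two trails (sequences of categorical states),
    # computed with a single rolling row of the dynamic-programming table.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # deletion, insertion, and substitution/match costs
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def cluster_trails(trails, max_dist=0.5):
    # Greedy single-link grouping: a trail joins the first cluster containing
    # any trail within the normalized distance threshold, else starts its own.
    clusters = []
    for t in trails:
        for c in clusters:
            if any(edit_distance(t, u) / max(len(t), len(u)) < max_dist for u in c):
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

# Hypothetical example: each trail is a user's sequence of shared URL domains.
trails = [
    ["news.example", "blog.example", "news.example"],
    ["news.example", "blog.example", "blog.example"],
    ["video.example", "video.example"],
]
clusters = cluster_trails(trails)  # first two users group together
```

Real trail-clustering work would use richer similarity measures and proper clustering algorithms, but the sketch shows the core loop: compare state sequences, then group by similarity.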
  • Introduction to ORA (pre-recorded and live session on November 18th)
A lecture and hands-on workshop in which attendees learn about network science and the ORA toolkit. Using ORA, attendees will learn how to import, export, visualize, and assess data. Attention will focus on processing Twitter, blog, and YouTube data; network analytics for content; and topic group detection. Participants will be presented with a thorough demonstration of software features used to create a sample network and analyze it. Sample data sets will be available.
  • Critical Thinking and Resilience (live on November 18th)
As information — true, false, and misleading — spreads at ever-faster rates, it is increasingly important for individuals and communities to be able to think and reason critically about the world around them. This session will examine the cognitive bases of critical thinking and informational resilience, as well as ways to improve both in real-world settings.
  • Deepfakes and synthetic video (live on November 18th)
Technological systems can now create highly realistic synthetic images and videos — pictures and movies that are indistinguishable from reality to the human eye. This session will examine current capabilities for both generation and detection of synthetic video, as well as potential responses to misleading or malicious synthetic content.

8:45am - 10:45 am: Polarization and Hate Speech

Social Media, News, Polarization, and Disinformation In Times Of Crisis: A Case Study On Turkey Baris Kirdemir, Nitin Agarwal
War on “Fact Check” -- the Path to Magic 270 Rodrigue Rizk, Dominick Rizk, Vijay Srinivas Tida, and Sonya Hsu
Visualizing Vitriol: Hate Speech and Image Sharing in the 2020 Singaporean Elections Joshua Uyheng, Lynnette Hui Xian Ng, and Kathleen M. Carley
From Xenophobia to Political Confrontation: Shifting Online Discussions of Racism During the COVID-19 Pandemic Joshua Uyheng, Daniele Bellutta, and Kathleen M. Carley
Utilizing Topic Modeling and Social Network Analysis to Identify and Regulate Toxic COVID-19 Behaviors on YouTube Adewale Obadimu, Tuja Khaund, MaryEtta Morris, Esther Mead, and Nitin Agarwal

10:45am - 11:00am: Break

11:00am - 12:10pm: Emerging Technologies and Techniques 1

Human-Aware Interdisciplinary Models to Identify and Understand Disinformation Kai Shu and Huan Liu
Unsupervised Characterization of State-Sponsored Twitter Information Operations with Multi-view Clustering Joshua Uyheng, Iain Cruikshank, and Kathleen M. Carley
Designing Assistive AI Technologies to Support Human Judging of Information Reliability Matthew Lease

12:10pm - 12:40pm: Lunch Break

12:40pm - 1:50pm: Emerging Technologies and Techniques 2

Cultural Convergence: Insights into the behavior of misinformation networks on Twitter Liz McQuillan, Erin McAweeney, Alicia Bargar, and Alex Ruch
Analysis of evolution of meme trends on 4chan.org's /pol/ board via image clustering J. Jin, E. Williams, S. Lam, O. Savas, E. Hohman, M. Bosch-Ruiz and P. Rodrigues
Designing a Training Game to Fight Misinformation on Social Media Catherine King and Christine Sowa


1:50pm - 2:00pm: Break

2:00pm - 3:00pm: "Selling Lies" Panel

Moderated by IDeaS Co-Director David Danks, this session will screen the 30-minute feature "Selling Lies" and include a Q&A with director Leslie Iwerks.

3:00pm - 3:15pm: Break

3:15pm - 5:00pm: Special Double Panel on Disinformation - Part 1

  • David A. Broniatowski (George Washington University)
Can Communicating the Gist Combat Systematic Manipulation of the Online Vaccine Discourse?
In this talk, I will provide an overview of the different ways in which malicious actors manipulate the vaccine debate online in order to advance a range of hidden agendas. Specific attention will be given to the role of domestic organizations framing vaccine refusal as a civil right, and foreign organizations that have promoted vaccine and COVID-19 disinformation for geopolitical ends. These manipulations occur in an environment of deep scientific uncertainty that is fueled by the ongoing COVID-19 pandemic. Thus, we conclude with evidence testing "Fuzzy-Trace Theory" — a leading psychological theory of decision under deep uncertainty that provides guidance for public health communicators who wish to combat these information operations and promote healthy behaviors.

  • Conrad Tucker (Carnegie Mellon University) 

From Generative Neural Networks to Social Media Networks: Ascertaining the Veracity of Data in the Information Age

Ascertaining the veracity of data in the information age is a challenge both for humans (e.g., communicating within social media networks) and for machines (e.g., training data for artificial neural networks). A lack of data veracity has the potential to "fool" both machines and humans into achieving different outcomes/output. From a machine learning perspective, "fooling" a machine has had a positive impact on the development of algorithms such as generative adversarial networks (GANs), and has resulted in the ability of machines to generate hyper-realistic data such as images, 3D geometries, and text. However, adverse effects can be observed in large-scale social media networks, where the veracity of data cannot be quickly ascertained. Misinformation spread via social media networks can result in echo chambers: insular communities that facilitate selective content diffusion as a result of user polarization. Ironically, this misinformation can now be reliably generated using machine learning algorithms such as GANs. Our research focuses on developing methods both to generate high-quality data and to safeguard against data exploitation. Several application domains are explored, including product design and development, healthcare physiology state estimation, and STEM education.


  • Mike Fulk (MITRE)
Examining the mechanics of the online QAnon movement and its evolution

MITRE mapped the QAnon online and social media space to detect related conspiracy theories and changing tradecraft. Omelas tracked QAnon propagation on social networks after so-called bans on Twitter, YouTube, and Facebook and found that the social networks play key roles in the dissemination of Q content.


5:00pm - 5:15pm: Break

5:15pm - 6:25pm: ReOpen America

ReOpen demands as public health threat: a sociotechnical framework for understanding the stickiness of problematic content Francesca Bolla Tripodi
Information Processing on Social Media Networks as Emergent Collective Intelligence Martin Smyth, Cody Buntain, Debra Dwyer, Joseph Finn, Jason Jones, Joshua Garland, and Michael Egan
Exploring Opposing Twitter Activity during the Early Anti-Lockdown Protests Matthew Babcock

8:45am - 9:55am: COVID Conspiracies and Disinformation 1

Hunting Conspiracy Theories During the COVID-19 Pandemic J.D. Moffitt
Systems Thinking and Modeling in Social Networks: A Case Study of Controlling COVID-19 Conspiracy Theories Mustafa Alassad, Muhammad Nihal Hussain, and Nitin Agarwal
A Fine-Grained Analysis of Misinformation in COVID-19 Tweets Sumit Kumar, Raj Ratn Pranesh, and Kathleen M. Carley

9:55am - 10:05am: Break

10:05am - 11:35am: Interventions

Trust & Safety team: A 2020 Census Initiative to Mitigate the Impact of Misinformation on Civic Activities Stephen Buckner and Zack Schwartz
Truth Finding in Response to Misinformation Ebrahim Bagheri
Social Media Users' agency in Online Misinformation Sharing Wen-Ting Chung and Yu-Ru Lin
Multi-modal False Information Detection for Combating COVID-19 Infodemic Kaize Ding, Qianru Wang, Ujun Jeong, Bohan Jiang, and Huan Liu

The Effect of Twitter Users' Commenting Behavior on the Propagation of COVID-19 Misinformation Muheng Yan, Yu-Ru Lin, and Wen-Ting Chung

11:35am - 12:15pm: Lunch Break

12:15pm  - 2:00pm: Deepfakes Panel “The challenge of deepfakes and manipulated content”

  • Sam Gregory (WITNESS)
  • Claire Leibowicz (Partnership on AI)
  • Steven Tiell (Accenture Labs)

2:00pm - 2:15pm: Break

2:15pm - 4:00pm: Special Double Panel on Disinformation - Part 2

  • David A. Mussington (University of Maryland)
A Framework for Evaluating Disinformation Harms from Social Media
Policy for managing harms from dis- and misinformation on social media must focus on the potential for deleterious effects, rather than on the platforms themselves. This "platform agnostic" approach means that the policies permitting current platform operations, not the business operations of the social media providers, should be the focus of possible remedial measures. Because these platforms operate with the advantages of private business yet enjoy broad regulatory exemptions justified by business-formation rather than social-value rationales, a new regime is needed for interacting with proto-monopolists occupying the space between "bare communications connectivity" and digital services provision. This presentation offers such a framework and lists some potential metrics requirements that flow from it.
  • Filippo Menczer (Indiana University)
4 Reasons Why Social Media Make Us Vulnerable to Manipulation

As social media become major channels for the diffusion of news and information, it becomes critical to understand how the complex interplay between cognitive, social, and algorithmic biases triggered by our reliance on online social networks makes us vulnerable to manipulation and disinformation. This talk overviews ongoing network analytics, modeling, and machine learning efforts to study the viral spread of misinformation and to develop tools for countering the online manipulation of opinions.
  • Kathleen M. Carley (IDeaS Center, Carnegie Mellon University)
Social cybersecurity is an emerging scientific area focused on the science to characterize, understand, and forecast cyber-mediated changes in human behavior and in social, cultural, and political outcomes, and to build the cyber-infrastructure needed for society to persist in its essential character in a cyber-mediated information environment under changing conditions and actual or imminent social cyber-threats. An example is the technology and theory needed to assess, predict, and mitigate social influence manipulation and the spread of disinformation by inauthentic actors such as bots, cyborgs, and trolls. Social cybersecurity is a computational social science in which the socio-political context is taken into account, advanced smart technologies operate alongside humans, and the operational utility of theory and methods is prized. Given the massive and ongoing changes in human communication and the changing affordances of available technologies, this is an area where it is critical to think beyond the boundaries of disciplines and to move to transdisciplinary theories and empirical research.


4:00pm - 4:10pm Break

4:10pm - 5:15pm COVID Conspiracies and Disinformation 2

Studying the Dynamics of COVID-19 Misinformation Themes Thomas Marcoux, Esther Mead, and Nitin Agarwal
Divide in Vaccine Belief in COVID-19 Conversations: Implications for Immunization Plans Aman Tyagi
COVID-19, Misinformation, and the African American Community Kellon Bubb

5:15pm - 5:25pm Break

5:25pm - 6:35pm: Protests

Detecting Coordinated Behavior in the Twitter Campaign to Reopen America Thomas Magelinski and Kathleen M. Carley
False Claims Hurt: Examining Perceptions of Misinformation Harms during Black Lives Matter Movement Thi Tran, Rohit Valecha, and H. Raghav Rao
Modeling Protester Orchestration through Connective Action: A COVID-19 Lockdown Protest Case Study Billy Spann, Oluwaseun Johnson, Esther Mead, and Nitin Agarwal

 

Moderators and Panelists:

Kathleen M. Carley, IDeaS Co-Director
Kathleen M. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), a university-wide interdisciplinary center that brings together network analysis, computer science, and organization science, and the director of the Center for Informed Democracy and Social-cybersecurity (IDeaS). Her research combines cognitive science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis; computational social and organization theory; information and disinformation diffusion; adaptation and evolution; text mining; and the impact of telecommunication technologies and policy on communication, disease contagion, and response within and among groups, particularly in disaster or crisis situations. She and her lab have developed infrastructure tools for analyzing large-scale dynamic networks, social media analytics tools, and various agent-based simulation systems.
David Danks, IDeaS Co-Director
David Danks is L.L. Thurstone Professor of Philosophy & Psychology, and Head of the Department of Philosophy, at Carnegie Mellon University. He is also the Chief Ethicist of CMU’s Block Center for Technology & Society; co-director of CMU’s Center for Informed Democracy and Social Cybersecurity (IDeaS); and an adjunct member of the Heinz College of Information Systems and Public Policy, and the Carnegie Mellon Neuroscience Institute. His research interests are at the intersection of philosophy, cognitive science, and machine learning, using ideas, methods, and frameworks from each to advance our understanding of complex, interdisciplinary problems. Danks has examined the ethical, psychological, and policy issues around AI and robotics in transportation, healthcare, privacy, and security.
Conrad Tucker
Dr. Conrad Tucker is an Arthur Hamerschlag Career Development Professor of Mechanical Engineering and Machine Learning (Courtesy) at Carnegie Mellon University. His research focuses on the design and optimization of systems through the acquisition, integration and mining of large scale, disparate data. Dr. Tucker has served as PI/Co-PI on federally/non-federally funded grants from the National Science Foundation (NSF), the Air Force Office of Scientific Research (AFOSR), the Defense Advanced Research Projects Agency (DARPA), the Army Research Laboratory (ARL), the Office of Naval Research (ONR) via the NSF Center for eDesign, and most recently, the Bill and Melinda Gates Foundation (BMGF). In February 2016, he was invited by National Academy of Engineering (NAE) President Dr. Dan Mote, to serve as a member of the Advisory Committee for the NAE Frontiers of Engineering Education (FOEE) Symposium. He received his Ph.D., M.S. (Industrial Engineering), and MBA degrees from the University of Illinois at Urbana-Champaign, and his B.S. in Mechanical Engineering from Rose-Hulman Institute of Technology.
David A. Broniatowski
David A. Broniatowski, Ph.D., conducts research in decision making under risk, the design and analysis of complex technological systems, and behavioral epidemiology. This research program draws upon a wide range of techniques including formal mathematical modeling, experimental design, automated text analysis and natural language processing, social and technical network analysis, and big data. His work on systematic distortions of public opinion about vaccines on social media by state-sponsored trolls has been widely reported in the academic and popular press.
Filippo Menczer
Filippo Menczer is a distinguished professor of informatics and computer science and director of the Observatory on Social Media at Indiana University. He holds a Laurea in Physics from the Sapienza University of Rome and a Ph.D. in Computer Science and Cognitive Science from the University of California, San Diego. Dr. Menczer is an ACM Distinguished Scientist and a board member of the IU Network Science Institute. His research interests span Web and data science, computational social science, science of science, and modeling of complex information networks. In the last ten years, his lab has led efforts to study online misinformation spread and to develop tools to detect and counter social media manipulation.
David Mussington

Dr. David Mussington is the Director of the Center for Public Policy and Private Enterprise and Professor of the Practice at the University of Maryland School of Public Policy. He is also a non-resident Senior Fellow of the Center for International Governance Innovation. Dr. Mussington was an Assistant Director of the Information Technology and Systems Division at the Institute for Defense Analyses (IDA). While at IDA, he led innovative projects on critical infrastructure cybersecurity, information sharing, and military cyber doctrine. Additionally, he developed research partnerships with NATO, conducted an evaluation of cybersecurity information sharing programs for the US Department of Homeland Security, and directed an analysis for the Office of the Director of National Intelligence of federal information security reporting standards and metrics. Dr. Mussington worked at the White House on the Obama Administration National Security Council Staff as Director for Surface Transportation Security Policy. He holds a Ph.D. in Political Science from Carleton University and B.A. and M.A. degrees in Economics and Political Science from the University of Toronto.

Leslie Iwerks

Leslie Iwerks is an Academy Award® and Emmy® nominated director and producer who currently serves as the CEO and Creative Director of Iwerks & Co., a Santa Monica based multimedia production company.

Iwerks creates critically acclaimed and award-winning documentaries, features and series that celebrate the genius, risks and rewards of creative visionaries, showcase heartfelt human tales, from the depths of the Guatemalan garbage dumps to the toxic tar sands of Alberta, Canada, and that document the human story of innovation and enterprise globally. An adventure and travel enthusiast, Iwerks has filmed on all seven continents around the world. 

Her body of work encompasses feature films, such as The Pixar Story, Citizen Hearst, Industrial Light & Magic: Creating the Impossible, The Hand Behind the Mouse: The Ub Iwerks Story and League of Legends Origins, acclaimed environmental documentaries, including Recycled Life, Pipe Dreams and Downstream, and most recently, the in-depth docuseries, The Imagineering Story, which debuted on Disney+ in 2019.

Iwerks’ desire to innovate and push boundaries with her filmmaking has been cultivated and inspired by her family upbringing, as her grandfather, Ub Iwerks, was the original designer and co-creator of Mickey Mouse and a multi Academy Award®-winning visual effects pioneer, and her father, Don Iwerks, is also an Academy Award® winner for Technical Lifetime Achievement, and the founder of the large format film company, Iwerks Entertainment, which has built Iwerks large format theaters and projections systems in over 200 theaters around the world. 

Mike Fulk
Mike Fulk has worked in online and social media analytics for 10 years, with the last two years focused on integrating and building capabilities to detect manipulation and threats at scale. He builds high-performing teams with deep expertise in publicly available information, data science, machine learning, information integrity, content management, decision support, policy issues, and stakeholder engagement. Mike received a master's degree in Computer Science from the Georgia Institute of Technology. He has worked at MITRE for the last 10 years, and at Harris Corporation, Lockheed Martin, and BAE Systems prior to MITRE.