Carnegie Mellon University
October 14, 2020

Experts Discuss Combating Disinformation One Month from the Election

By Julie Mattera


Disinformation is spreading more widely than ever before, sowing falsehoods, increasing political polarization and potentially infringing on the democratic process. Experts say disinformation campaigns are targeting not only the 2020 presidential election but also state and local governments, which are often ill prepared to respond.

Whereas most elections and natural disasters give rise to anywhere from seven to 100 disinformation campaigns, the COVID-19 pandemic has produced thousands, said Carnegie Mellon’s Kathleen M. Carley, a professor in the School of Computer Science's Institute for Software Research.

Carley, who leads Carnegie Mellon’s Center for Informed Democracy & Social-cybersecurity (IDeaS), said these campaigns are increasingly political in nature and often come in the form of anti-U.S. rhetoric from other countries. She said misinformation has helped widen the partisan divide in the U.S., contributed to the formation of hate groups and fed the growth of anti-science sentiment.

“Social media at this point is so infected with disinformation, the classic strategies of simply banning a message or banning a bot do not have the effect that they used to have,” said Carley. “If you ban it on Twitter, it will show up on Facebook. If you ban it on Facebook, it will show up on YouTube and so on.”

Carley’s comments came this week during a virtual roundtable discussion hosted by IDeaS and CMU’s Block Center for Technology and Society. The event, “Combating Disinformation One Month from the Election: What State and Local Policy Makers Can Do,” featured experts from the university, the Brookings Institution and the Information Technology and Innovation Foundation (ITIF). The panel discussed disinformation on social media and how to address it effectively.

Scott Andes, executive director of the Block Center, moderated the discussion. Later in the day, experts also took questions from and discussed disinformation topics with journalists from MIT Technology Review and NBC News.

Panelist Nicol Turner Lee, senior fellow in Governance Studies and director of the Center for Technology Innovation at the Brookings Institution, said that disinformation in 2016 discouraged people of color from going to the polls and, in some instances, successfully kept them away. Lee said this infringed on the civil rights protections guaranteed to certain groups under the Voting Rights Act.

“That is the reason why I'm sitting here today, as a person who's a technologist whose work undergirds race, social justice and technology,” said Lee, who also serves as co-editor-in-chief of TechTank. “It’s important to have these conversations going into this election.”

In addition to foreign actors spreading disinformation, Lee said domestic groups, including white supremacists and even the White House, have been associated with disinformation campaigns. Domestic disinformation has incited real-world action, including the infiltration of protests following the death of George Floyd. At the same time, Lee said misinformation about the pandemic is being used to stoke fear of infection among African Americans and Latinos and, much as in 2016, discourage them from voting.

Daniel Castro, ITIF vice president and director of the foundation’s Center for Data Innovation, said disinformation has also appeared repeatedly at the state and local level. He cited Colorado, where legitimate pictures of blue and red ballots were misrepresented this summer, leading people to question whether they should vote by mail even though the state has used mail-in ballots for some time.

“People were … saying that these different ballots were printed in different colors so that state officials could see if somebody was a registered Republican or registered Democrat and then mishandle their ballot in response,” Castro said.

Castro said deep fakes also are a growing threat as videos become more detailed, realistic and easier to produce.

Yonatan Bisk, assistant professor in the Language Technologies Institute in CMU’s School of Computer Science, said AI can be used to detect deep fakes or hate speech. But detecting content generated by machines is much easier than detecting content generated by humans, which can lack explicit derogatory terms or rely on a juxtaposition of words and context that an AI might miss.

“This is part of a challenge that Facebook is running into right now on detecting hateful memes,” Bisk said. “So, ‘your wrinkle cream is working great’ as a meme on a photo of an alligator is actually incredibly difficult for our system to understand because there’s nothing about the text that indicates that it’s negative.”
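To make the juxtaposition problem concrete, here is a minimal sketch, not Facebook’s system, of why a text-only classifier misses a meme whose caption is benign on its own. The feature names, data and scores are toy values invented for illustration; real systems learn joint embeddings from the raw text and pixels.

# Toy demonstration of why hateful-meme detection needs joint
# text+image features. All numbers here are illustrative, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors: [text_toxicity, image_hostility]
X = np.array([
    [0.90, 0.05],  # explicit slur in text, neutral image -> hateful
    [0.80, 0.10],  # explicit insult in text, neutral image -> hateful
    [0.10, 0.95],  # benign caption on a targeted image -> hateful (juxtaposition)
    [0.10, 0.05],  # benign caption, neutral image -> benign
    [0.15, 0.10],  # benign caption, neutral image -> benign
    [0.20, 0.05],  # benign caption, neutral image -> benign
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = hateful meme, 0 = benign

# A text-only model sees just the caption score and misses juxtaposition;
# a joint model sees both modalities and can catch it.
text_only = LogisticRegression().fit(X[:, :1], y)
joint = LogisticRegression().fit(X, y)

# "Your wrinkle cream is working great" over a hostile image:
meme = np.array([[0.10, 0.95]])
print("text-only flags it:", bool(text_only.predict(meme[:, :1])[0]))  # False
print("joint model flags it:", bool(joint.predict(meme)[0]))           # True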

How to combat disinformation

Individuals, companies and governments at all levels have a role to play in mitigating disinformation, the panelists agreed. Officials at the local and federal levels have opportunities to combat disinformation on a large scale and provide funding for necessary research.

People can use simple tactics to combat disinformation, Carley said, such as calling it out online, fact-checking information and building a trusted network of people for sharing reliable information. Officials also may want to establish a social cybersecurity team to monitor harmful disinformation, respond to it and reduce its impact on the community.

Castro said local officials need to be prepared to respond quickly to misinformation by correcting the record on social media and by engaging with local journalists, who may be more trusted than government sources. There also are opportunities with technology, including adding metadata to videos, photos and documents online to show their origin and verify their legitimacy.
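As one illustration of how such metadata might work, here is a minimal sketch, assuming a hypothetical publisher that holds a signing key. It binds a file’s hash and claimed source to a verification tag; a real deployment would use public-key signatures and a standardized metadata format rather than this toy shared-secret scheme.

# Toy provenance record for a media file. The key and source names
# are hypothetical; real systems use public-key signatures.
import hashlib, hmac, json

SECRET = b"newsroom-signing-key"  # illustrative shared secret

def attach_provenance(media_bytes: bytes, source: str) -> dict:
    """Create a record binding the file's hash to its source."""
    record = {"source": source,
              "sha256": hashlib.sha256(media_bytes).hexdigest()}
    record["tag"] = hmac.new(SECRET, json.dumps(record, sort_keys=True).encode(),
                             hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Recompute the hash and tag; any edit to the file fails the check."""
    expected = {"source": record["source"],
                "sha256": hashlib.sha256(media_bytes).hexdigest()}
    tag = hmac.new(SECRET, json.dumps(expected, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, record["tag"])

photo = b"...raw image bytes..."
rec = attach_provenance(photo, "county-elections-office")
print(verify_provenance(photo, rec))              # True
print(verify_provenance(photo + b"edited", rec))  # False

Because the tag covers the file’s hash, altering either the media or its claimed source causes verification to fail.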

“The most important thing in this space is increasing digital literacy,” Castro said. “We really don't have a good blend of digital literacy and media literacy where we teach people how to consume information online and how to question it. We have seen in other countries that have focused on this in the past that that’s a very effective technique for getting people to be responsible consumers of information so that that human factor isn’t what’s accelerating the spread of the misinformation.”

Bisk said AI also can be used to help with monitoring and with identifying trending topics that don’t use hashtags.

“If you're a policymaker … these kinds of things should be in your toolbox for keeping a pulse on how things are changing and what people are talking about so that you can be very strategic about your messaging,” Bisk said.
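As a rough illustration of the kind of tool Bisk describes, and not a description of any panelist’s actual system, here is a minimal sketch that surfaces trending terms without hashtags by comparing word frequencies in a recent window of posts against a baseline window.

# Toy trend detector: rank terms by growth in a recent window
# relative to a baseline window. All posts below are invented examples.
from collections import Counter
import re

def term_counts(posts):
    counts = Counter()
    for post in posts:
        counts.update(re.findall(r"[a-z']+", post.lower()))
    return counts

def trending(recent_posts, baseline_posts, top_n=3, smoothing=1.0):
    recent = term_counts(recent_posts)
    baseline = term_counts(baseline_posts)
    # Ratio of smoothed counts; smoothing keeps terms absent
    # from the baseline from dividing by zero.
    scores = {t: (recent[t] + smoothing) / (baseline[t] + smoothing)
              for t in recent}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# "ballots" spikes in the recent window with no hashtag in sight.
baseline = ["nice weather today", "great game last night", "traffic is bad"]
recent = ["long lines for ballots downtown", "ballots arrived late",
          "are mail ballots being counted"]
print(trending(recent, baseline))  # 'ballots' ranks first

A production system would add stop-word filtering, phrase detection and significance testing, but the core signal, a term’s sudden growth relative to its baseline rate, is the same.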