
As coronavirus necessitates more private-public partnerships, it’s time to confront the trust crisis

By Lauren Prastien


Currently, at least 13 countries around the world, including the United States, South Korea, Iran and Italy, are using people’s smartphones to try to stem the spread of COVID-19. In the United States, many of these efforts have taken the form of private-public partnerships with large tech companies, as well as the mobile advertising industry. These partnerships hinge on the use of smartphone geolocation data and other health-related data to lessen the impact of COVID-19. Many large tech companies, such as Google and Apple, have also begun to undertake interoperability measures to expedite data-sharing between platforms and operating systems.

The use of mobile data, location history and even tracking devices has opened up a contentious debate about tensions between the protection of privacy rights and the urgency of public health during a pandemic. Just as these uncertain times have thrown such pervasive issues as the lack of consistent broadband availability into sharp relief, they have also underscored the public’s crisis of trust in the large tech companies whose products have become integral to the ways we live, work, and interact with each other.

Though many worry that we stand on the precipice of a new era of mass surveillance, the coronavirus response could instead be a watershed moment in the responsible use of private data for the public good.

In January of this year, the World Health Organization released a bulletin on its COVID-19 response strategy that declared, “rapid data sharing is the basis for public health action.” Currently, tech companies’ massive data collection apparatuses and technical know-how make them strong candidates to assist in these efforts. (For instance, a massive uptick in Google searches for the phrase “I can’t smell” in states with a high prevalence of COVID-19 helped epidemiologists and health officials learn more about the disease’s symptoms and determine new outbreak epicenters.) However, a recent survey found that while 54% of Americans expressed interest in sharing their health status data with government health officials, only 21% of respondents were comfortable sharing anonymized diagnostic data with an app. In other words, there is a significant lack of public trust in the large tech companies that could be well-suited to execute data-driven responses to COVID-19.

It’s important to keep in mind that much of the data currently being collected for contact tracing and other COVID-19 interventions is not new data. According to Tom Mitchell, a professor in the School of Computer Science at Carnegie Mellon University, these solutions revolve less around gathering new data than around creatively combining existing data sets.

“The kind of reaction that I’ve gotten is generally a fear of privacy invasion,” says Mitchell. “People would say, ‘oh, but then you'd have a system collecting data about me.’ Well, we already have those systems. They are collecting the data.” 

Mitchell is interested in combining existing geolocation data with other resources to swiftly inform people of potential exposure to epidemics like COVID-19. For example, cross-referencing emergency room admissions data with credit card data could identify people who may have been in close proximity to someone diagnosed with the coronavirus.

“That data is there,” says Mitchell. “It's just not being used for public health purposes.”
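To make that concrete, here is a minimal sketch, in Python, of the kind of cross-referencing Mitchell describes: given anonymized credit card transactions and a list of later diagnoses, flag anyone whose purchases put them at the same place, around the same time, as someone who was subsequently diagnosed. Every name, record format, and time window here is a hypothetical assumption for illustration; this is not Mitchell’s system, and a real deployment would also draw on geolocation and admissions data, with de-identification and strict access controls.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# All record formats and identifiers below are hypothetical stand-ins for data
# that would, in reality, live in separate, access-controlled systems.

@dataclass
class Transaction:
    person_id: str       # anonymized identifier
    merchant_id: str     # a shared merchant stands in for a shared physical location
    timestamp: datetime

@dataclass
class Diagnosis:
    person_id: str
    diagnosed_on: datetime

def potential_exposures(transactions, diagnoses,
                        window=timedelta(hours=1), lookback=timedelta(days=14)):
    """Flag people whose card activity places them at the same merchant as a
    later-diagnosed person, within `window` of each other, during the
    `lookback` period before that person's diagnosis."""
    diagnosed_on = {d.person_id: d.diagnosed_on for d in diagnoses}
    flagged = set()
    for a in transactions:
        cutoff = diagnosed_on.get(a.person_id)
        # Only consider a diagnosed person's activity in the lookback window.
        if cutoff is None or not (cutoff - lookback <= a.timestamp <= cutoff):
            continue
        for b in transactions:
            if b.person_id == a.person_id or b.person_id in diagnosed_on:
                continue
            if b.merchant_id == a.merchant_id and abs(b.timestamp - a.timestamp) <= window:
                flagged.add(b.person_id)
    return flagged

# Example: one shopper overlaps with a later-diagnosed shopper at the same store.
transactions = [
    Transaction("person-a", "grocery-42", datetime(2020, 3, 10, 17, 5)),
    Transaction("person-b", "grocery-42", datetime(2020, 3, 10, 17, 20)),
]
diagnoses = [Diagnosis("person-a", datetime(2020, 3, 14))]
print(potential_exposures(transactions, diagnoses))  # {'person-b'}
```

The point of the sketch is that no new data collection is involved: the inputs already exist, and what is missing is the legal and technical framework for combining them for public health purposes.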

Today, geolocation data from smartphones is used to do things like update live traffic patterns on platforms like Google Maps (yes, that’s where the red lines come from: it’s all based on which smartphones are stuck sitting in traffic) or determine peak business hours on those little bar graphs you see anytime you Google a business or a location (again, just smartphones standing in line at the supermarket).

But these methods typically require some form of voluntary participation. Just as people can decide not to use a credit card at the store, they can also decide to shut off their mobile apps’ location services if they don’t want to make that trade-off between privacy and convenience. If citizens don’t trust these companies, and perhaps more significantly the government, to use this data appropriately and not to cause harm through these digital platforms, none of this works.

Today, the stakes of the trust crisis in tech have been amplified by the need for anonymized data to inform the COVID-19 response. If someone does not trust Facebook after the Cambridge Analytica scandal, for instance, that means Facebook has lost a customer. But if we need these tech companies to serve a public health mission, the lack of trust in these companies becomes a much more significant social and public health issue.

A promising way to address this trust crisis could be regulating how this data is used for commercial purposes.

“I think there are many people who would say, ‘if I can trust the privacy protections that are going to be built in and how that data will be shared, then I'm going to be willing to allow my data to be used for any kind of nonprofit, pure research while not allowing it to be used for marketing purposes,’” explains Henry Kautz, the Division Director for Information & Intelligent Systems at the National Science Foundation and a professor of computer science at the University of Rochester. 

As we begin to form more private-public partnerships in the area of public health, it’s time for policymakers to define clearer guardrails on the appropriate, context-specific use of data. Particularly as the public becomes increasingly wary of the commercialization of personal information, setting up firewalls now will not only incentivize participation in data-sharing to address COVID-19 but also help ensure that these protections remain in place post-coronavirus.


This article first appeared in The Hill.