Carnegie Mellon University
July 19, 2018

TheirSpace: How the Facebook Privacy Debate Raises Ethical Questions for the Virtual Age

By John Hooker, professor of operations research and T. Jerome Holleran Professor of Business Ethics and Social Responsibility

Each time a user creates a new account on Facebook, the slogan “It’s free and always will be” appears right above the spaces for entering a first and last name.

But recent revelations about Cambridge Analytica’s cozy arrangement to exploit Facebook users’ data have left many people realizing that “free” is a relative term, and not all currency is measured in dollars and cents.

While Mark Zuckerberg’s appearance before Congress might answer a few questions specific to Facebook’s practices, it should raise broader concerns about how we want to be able to conduct our virtual lives, and how much freedom we’re willing to concede to the next prodigy who creates an intriguing new technology.

Historically, we Americans have been somewhat lax about protecting ourselves from our own inventions. Ford began mass-producing automobiles in 1908, but many people drove them recklessly without the constraint of traffic laws, and the federal government did not require cars to carry seat belts for another 60 years. And while X-rays were first used around the turn of the 20th century, often leading to death or serious injury from electrocution and radiation burns, safe practices took decades to develop.

Although Facebook is bearing the brunt of public outrage now, social media have never been private. Even early platforms such as MySpace collected user data and exploited it for marketing purposes; it might as well have been called TheirSpace. Online sites soon began using sophisticated data mining algorithms to connect the dots and build user profiles. They may know more about your life than you do. The current situation points to a possible tipping point at which regulators may finally be spurred to action. The sheen of social media is wearing off, and we are beginning to grow up and get serious about the possible ill effects of this technology.

In the study of business ethics, we typically ask three basic questions to determine whether a company’s practice is unethical. The first is what ethicists call a generalization test, which in this context asks: are people deceived? To deceive is to cause someone to believe something you know is false. So we must ask whether online sites have caused people to believe their information is more protected than it actually is. The answer is almost certainly yes. Sophisticated users might know about the extent of data harvesting, but others do not, and they are certainly unaware of how clever mathematical techniques create online dossiers. This is why so many people are shocked by recent exposures of Facebook’s practices.

The second question is: what is the net effect, on balance, of the business practice? Does it produce the greatest net benefit for the greatest number of people? This principle, introduced by English philosopher Jeremy Bentham in the late 18th century, is known as utilitarianism. In this instance, it’s difficult to make a clear call about the net effect of Facebook’s practices. Targeted ads can be useful; on the other hand, large amounts of data sitting on someone else’s server create tempting targets for hackers.

The third question is: does the practice violate people’s autonomy? Constant virtual surveillance has the potential to do so. It is hard to be an autonomous human being if, for example, someone else can peek inside your brain and scrutinize your every thought. Online surveillance has not gone this far, but it is eerily reminiscent of the classic concept of the Panopticon, also conceived by Jeremy Bentham. This was an imagined system of control in which inmates of an institution could be watched at any time without knowing when they were being watched. Bentham boasted that it was “a new mode of obtaining power of mind over mind.” As we live more and more of our lives online, data harvesting threatens to become the Panopticon of our electronic age. Even if we have not reached that level of virtual incarceration, we must bear the possibility in mind and work to prevent it.

So how do we regulate technology companies in a way that preserves consumer privacy? For starters, transparency would go a long way toward achieving this goal. Current privacy settings are too often designed to be confusing and opaque. Government regulation could compel companies like Facebook to be honest and forthright about what they are doing with user data.

The European Union is already far ahead of the United States in terms of data privacy protection, beginning with the 1995 Data Protection Directive. The new General Data Protection Regulation, adopted in 2016 and in force since May 2018, establishes a much broader spectrum of protections. For example, there is a “right to erasure” – meaning a person can request that an organization delete their personal data – and companies must obtain explicit consent to process data.

As we move forward into the next phase of the digital age, when neighborhoods are pages on a screen, we must work to keep them as safe as we want our actual streets to be. We must insist that online operators live up to their ethical responsibility, and draw on the best practices of other developed nations, to ensure that we remain in charge of the data that describe our lives.