The views expressed by contributors are their own and not the view of The Hill

‘Privacy absolutism’ masks how consumers actually value their data



Ethical scandals in the tech industry seem to follow one another like night and day. Last month, a federal inquiry was opened into Google’s program to collect and analyze patient data in collaboration with health care providers, known as Project Nightingale. In August, many were taken aback by reports that Facebook and other tech giants had employed human contractors to listen to users’ Messenger conversations; Facebook immediately “paused” the program and gave the usual set of explanations.

But should we really be angry (again) with Facebook?

Though big tech detractors hate the proposition, there is some meat to the idea that people do not care about privacy in the absolute. This can be seen through the following thought experiment.

You and your spouse are sleeping upstairs in your house after a long day of work. Downstairs, the living room and kitchen are a mess. There just wasn’t time to manage it all. During the night, someone sneaks in, takes a photo of the disorder, and then proceeds to clean it up.

What is there to learn here? At first blush, all of us should find the privacy intrusion intolerable. And yet, on further thought, some of us may accept the trade-off of saving money on a cleaning service and enjoying breakfast in a tidy room. The argument that we are willing to trade certain forms of privacy for other benefits is not new — studies have previously found that most people were willing to give their Social Security number in exchange for a 50-cent coupon. But in the current privacy panic, we tend to forget that significant benefit comes in exchange for giving up our personal data. A recent study by Brynjolfsson, Collis & Eggers, looking at consumers’ preferences, estimates that the monthly median benefit for a Facebook user is $48.

More importantly, our thought experiment points to a simple heuristic filter for unproblematic data uses (and, conversely, for problematic ones) — whether or not data are, at some point, analyzed by a moral agent.

Our intolerance of privacy intrusions appears to depend on the ultimate use of the data. Say, in the messy-house illustration above, the photo is used to improve the dataset of a robot-cleaner manufacturer and, in turn, its robots’ autonomy in unstructured environments. There is less ground for privacy worry than if the photo were disseminated on social networks.

The upshot? The necessity of absolute privacy for users may not be as unconditional as we hear in public policy discussions.

It is not that hard to understand that no one at Facebook is passing moral judgment on photos of your carbon-intensive vacation, your meat consumption at a restaurant or your latest political rant. And when someone searches Google for “how to avoid taxes,” there’s no need to add, “I’m asking for a friend.” In both cases, there is just a set of algorithms seeing sequences of 1s and 0s. And when humans listen to your conversations with a digital assistant, they’re basically attempting to refine the accuracy of the translating machine that will one day replace them. How different is this from scientists in a laboratory looking at results from a clinical trial?

Once this is understood, efforts to impose privacy regulation on internet platforms should be conditional on subsequent platform interaction with third-party moral agents. The point is supported by the scientific literature. Economists Susan Athey, Christian Catalini and Catherine Tucker have found evidence of the “digital privacy paradox,” in which people say they care about privacy but, when they use social networks, relinquish private data quite easily once incentivized to do so. Similarly, information scientist Helen Nissenbaum writes that “the indignation, protest, discomfort, and resistance to technology-based information systems and practices … invariably can be traced to breaches of context-relative informational norms.”

Though it is easier said than done, one practical way to think about this is to consider use cases. When data fed to platforms are subsequently made available to moral agents like the press, political actors, law enforcement authorities or the justice system, there should be strict privacy protection.

When no social context or moral agent is involved, as in online commerce or digital currencies, privacy obligations seem less compelling and certainly not absolute. Now, of course, Project Nightingale would become a concern if your private health record were leveraged to sell you mortgages or insurance contracts at extractive prices. But, again, the problem is not lack of privacy — it’s lack of competition.

Note also that it may be tempting to infer that what people care about is privacy vis-à-vis other humans (reality TV notwithstanding). For example, we seem to care less about our privacy with regard to animals. Sapiens, wandering alone on the plains of Africa, placed little if any importance on privacy. In the digital world, however, moral agency depends on the context — not the ontology — of the observer.

In the physical world, many humans, such as doctors, lawyers and priests, observe private personal information without moral implications. By contrast, digital agents can involve intrusive moral agency, as with China’s citizen-scoring system.

On this latter point, a recent Pew Research Center survey found that Americans express more confidence when state actors employ face recognition than when private platforms do so — 56 percent trust law enforcement to use it responsibly, compared with 36 percent for tech firms and 18 percent for advertisers. Yet it is doubtful that this confidence would hold if face recognition were practiced at the same scale and intensity as it is in China. Again, privacy is contextual.

Privacy absolutism comes from a world of physical privacy where moral agents are the norm. The digital world, however, is different from the physical one. Leaving aside legitimate fears like information leaks, incorrect predictions or systemic externalities that may require appropriate regulations or technical standards, our policymakers must understand that users do not view digital privacy in black and white, but rather in shades of gray.

Bowman Heiden is a visiting fellow at the Hoover Institution at Stanford University. He also serves as co-director of the Center for Intellectual Property (CIP), a joint center for knowledge-based business development between the University of Gothenburg (Sweden), Chalmers University of Technology (Sweden) and the Norwegian University of Science and Technology.

Nicolas Petit is a visiting fellow at the Hoover Institution at Stanford University. He is also professor of law at the University of Liege (Belgium). His current research focuses on antitrust and digital economy firms, and on patent protection as an engine of innovation.

