
Privacy law needs privacy harm


There is a persistent irony in one of privacy law’s first stories. Bodil Lindqvist was a catechist and maintenance worker in a small Swedish parish. One day, she built a website to update parishioners on the goings-on of the parish; it featured contact information about the parish’s members and informed parishioners that the priest’s availability would be limited because he had injured his foot. This turned out to be a serious mistake on Lindqvist’s part. When this seemingly innocent website came to a Swedish prosecutor’s attention, Lindqvist learned that she had breached Swedish and European Union law by processing people’s personal information without their consent. Worse, she had published sensitive medical data (the injured foot). For such an oversight, she faced fines and criminal charges. The case was the first sign of a problem that current privacy regulations such as Europe’s General Data Protection Regulation (GDPR) and the new California Consumer Privacy Act (CCPA) have had from the start: They ignore privacy harm.

While it frequently shocks law students, privacy law professors often remember Lindqvist’s case as a success: It was the case in which the European Court of Justice first acknowledged Europeans’ rights to privacy and data protection. It gave teeth to the legislation that laid the groundwork for the GDPR and would come to be cited in countless data protection cases. But we often forget that the case came at a hefty price. Beyond subjecting a well-intentioned catechist to disproportionate sanctions, the irony is that it did so to solve a problem that, in that particular case, no one had.

In the past, privacy concerns were addressed through “bread and butter” lawsuits. People sued when someone opened their letters, broke into their house and went through their diary, or disclosed details about them in breach of professional secrecy. But the internet complicated things by introducing a host of new privacy harms that are more difficult to identify than those our society had dealt with in the past. When a website creates a ghost profile under someone’s name but she lacks evidence of reputational damage, courts are unsure whether to grant her a remedy. When a credit bureau is hacked but the victims lack evidence that the hack caused them financial damage, courts are likewise unsure whether to grant them a remedy.

To avoid this problem, laws and regulations such as the GDPR move privacy issues from private law to regulation. This approach was at first uniquely European (branded as data protection rather than privacy) but is increasingly used in other jurisdictions, most recently in California with the CCPA.

These regulations largely measure harm through regulated conduct: What matters is not whether a victim was harmed, but whether someone behaved in a way the regulation forbids. This approach avoids the seemingly difficult task of identifying privacy harm. But victims in this paradigm fade into the background: Companies are sanctioned for processing people’s personal information without authorization, independently of whether anyone was harmed. When someone is harmed, she can complain to her local data protection authority, but whether and how it investigates is its prerogative. This sometimes leads to over-patrolling conduct that produces no harm, and to leaving victims who did suffer harm without recourse.

For this reason, if what our society cares about is preventing people from being harmed, relying solely on current privacy regulations such as the GDPR and the CCPA is over-inclusive and under-inclusive at the same time. An example of over-inclusion is the approval procedures that researchers must go through before performing studies, even in low-risk situations. An example of under-inclusion is the countless meaningless manifestations of consumer ‘consent’ that companies obtain to process people’s personal data in harmful ways, like the box we all checked when signing up for Facebook, providing blanket consent to its terms of service.

Privacy harms lead to a host of other harms: reputational harm (publishing inaccurate information about a job candidate), financial harm (identity theft), and discriminatory or bodily harm (when a member of a non-visible minority is “outed”), among others. Privacy harm is also a distinct harm in itself, one that society already recognized in the opening of letters and breaches of professional secrecy, now in a new technological context: others learning intimate facts about us that we do not want them to know. While these harms are in fact difficult to prove in court, they are even more difficult to predict and prevent. The mistaken belief that addressing them is a de-regulatory endeavor disadvantages millions of consumers every day.

While some may write this off as technical, it has enormous social consequences. In the last year and a half, there have been numerous reactions to privacy law’s indifference to harm. These harms, which regulations such as the GDPR attempt to but cannot always prevent, are suffered by hundreds of thousands of people who increasingly demand a response when the regulations’ preventive function fails. Chiefly, privacy class actions continue to appear in the United States, Canada, and Europe over events such as the Equifax breach, which exposed the financial information of roughly 147 million people in 2017, inflicting privacy harm on them and exposing them to the risk of financial harm. These lawsuits claim that victims deserve compensation — that there is a duty to compensate these harms, as there was before the internet, independently of whether regulators issue fines. And courts are slowly starting to recognize it, as they did, for example, when Six Flags collected children’s biometric data without proper consent.

Telling these victims that they need not sue because a statute like the GDPR or the CCPA already protects them is the wrong answer.

Privacy law has a problem: It ignores privacy harm. While new and proposed regulations such as the CCPA can help consumers in meaningful ways, by ignoring harm they suffer from significant deficiencies in their ability to meaningfully protect people’s privacy, deficiencies that are in dire need of revisiting. Fixing them requires giving harmed individuals the spotlight; it requires rethinking the prohibition of activities that cause no harm and ceasing to ignore those that do by granting victims the right to sue.

As the U.S. draws on other countries’ experiences in building its own comprehensive privacy regulations, it may be time to bring back privacy harm as a distinguishing factor. It may be time for privacy law’s harm-agnostic approach to end.

NOTE: This post has been updated from the original to correct Ms. Lindqvist’s country of origin.

Ignacio N. Cofone is an Assistant Professor of Law at McGill University, where he teaches Privacy Law. He is also an Affiliated Fellow of the Yale Law School Information Society Project. Follow him on Twitter @IgnacioCofone.
