Archives

Kirsty Hughes, A Behavioural Understanding of Privacy and its Implications for Privacy Law

Comment by: Bruce Boyden

PLSC 2012

Workshop draft abstract:

This article draws upon social interaction theory to develop a theory of the right to privacy. By engaging with this literature and adopting a behavioural approach to privacy, we can better understand: how privacy is experienced; the different types of privacy that we experience; when an invasion of privacy occurs; and the social benefits of privacy.

In essence, this article claims that privacy plays a crucial role in facilitating social interaction and that an individual or group experiences privacy when he, she or they successfully employ barriers to obtain or maintain a state of privacy. Under this approach, an invasion of privacy occurs when those barriers are breached and the intruder obtains access to the privacy-seeker. This article proposes a new theory of privacy, explaining how it differs from existing theories and how it deals with a number of crucial and complex problems, including threats, attempts and cumulative interferences with privacy. It reflects on the implications of this analysis for privacy law, in particular: the reasonable expectation of privacy test; the concept of waiver; and the balancing of competing rights and interests.

Bruce Boyden: Can a Computer Intercept Your Email?

Comment by: Marcia Hofmann

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2028980

Workshop draft abstract:

Can a computer spy on you? Obviously a computer can be used to spy on you, but can the computer itself invade your privacy? This question, once perhaps metaphysical, has gained added urgency in recent years as email providers such as Google have begun scanning their users’ emails in order to target advertisements, and ISPs have begun using filtering to weed out viruses, spam, and, most controversially, copyrighted material from their systems. Such automated scanning and handling of electronic communications arguably violates the federal Wiretap Act, which prohibits intentional interception of an electronic communication without the consent of a party.

Interception is defined under the Wiretap Act as “the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device.” Courts have long struggled to apply this definition outside of the context of the traditional wiretap or electronic eavesdropping situation. For example, early Wiretap Act cases involving recordings posed the challenge of determining when exactly the moment of interception occurred: when the conversation was recorded, in which case the circumstances of the act of recording would determine whether an interception occurred, or when the conversation was listened to, in which case the circumstances of playback would determine whether there was a violation. In other words, in recording situations, do machines intercept communications or do humans? Courts have generally answered that question by holding that it is machines that accomplish the interception, albeit, as one early case put it, as the “agent of the ear.” Subsequent decisions have held recordings or copied communications to be interceptions whether or not they are ultimately listened to or read by humans.

The conclusion that devices intercept, even if it makes sense for the recording context, should not be reflexively applied to all automated handling of communications. Even if “acquisition” can apply to a recording rather than perception, recordings and other copies enable human perception of the contents of a communication. It is the prospect of third-party perception of the contents of a private communication that is the harm the Wiretap Act protects against. Unmoored from that prospect of harm, automated handling of communications does not pose the relevant danger and should not fall within the definition of “acquisition.” It neither carries those contents to a human for perception, nor does it capture them for later perception. Advertising, filtering, blocking, and other actions in which the substance of the communication is not preserved should not be held to be a violation of the ECPA.

Paul Ohm, The Benefits of the Old Privacy: Restoring the Focus to Traditional Harm

Comment by: Bruce Boyden

PLSC 2010

Workshop draft abstract:

The rise of the Internet stoked so many new privacy fears that it inspired a wave of legal scholars to create a new specialty of legal scholarship, Information Privacy law. We should both recognize this young specialty’s great successes and wonder about its frustrating shortcomings. On the one hand, it has provided a rich structure of useful and intricate taxonomies with which to analyze new privacy problems and upon which to build sweeping prescriptions for law and policy.

But why has this important structural work had so little impact on concrete law and policy reform? Has any significant new or amended law depended heavily on this impressive body of scholarship? I submit that none has, which is particularly curious given the way privacy has dominated policy debates in recent years.

In this Article, I propose a theory for why the Information Privacy law agenda has failed to provoke meaningful reform. Building on Ann Bartow’s “dead bodies” thesis, I argue that Information Privacy scholars gave up too soon on the prospect of relying on traditional privacy harms, the kind of harms embodied in the laws of harassment, blackmail, discrimination, and the traditional four privacy torts. Instead, these scholars have proposed broader theories of harm, arguing that we should worry about small incursions on privacy that aggregate across society, focusing on threats to individual autonomy, deliberative democracy, and human development, among many other values. As the symbol of these types of privacy harms, these scholars have pointed to Bentham’s and Foucault’s Panopticon.

Unfortunately, fear of the Panopticon is unlikely to drive contemporary law and policy for two reasons. First, as a matter of public choice, Panoptic fears are not the kind that spurs legislators to act. Lawmakers want to point to poster children suffering concrete, tangible harm—to Bartow’s dead bodies—before they will be motivated to act. The Panopticon provides none. Second, privacy is a relative, contingent, contextualized, and malleable value. It is weighed against other values, such as security and economic efficiency, so any theory of privacy must be presented in a commensurable way. But the Panopticon is an incommensurable fear. Even if you agree that it represents something dangerous that society must work to avoid, when you place this amorphous fear against any concrete, countervailing value, the concrete will always outweigh the vague.

I argue that we should shift our focus away from the Panopticon and back onto traditional privacy harm. We should point to people who suffer tangible, measurable harm; we should spotlight privacy’s dead bodies.

But this isn’t a call to return meekly to the types of narrow concerns that gave rise to the traditional privacy torts. Theories of privacy harm should include not only the stories of people who already have been harmed but also rigorous predictions of new privacy harms that people will suffer because of changes to technology.

Ironically, information privacy law scholars who make these kinds of predictions will often propose prescriptions that are as broad and sweeping as some of those made by their Panopticon-driven counterparts. Traditional-harm theories of information privacy aren’t necessarily regressive forms of privacy scholarship, and this Article points to the work of a new wave of information privacy law scholars who are grounded in traditional harm but at the same time offer aggressive new prescriptions. From my own work, I revisit the “database of ruin” theory, a prediction that leads to aggressive prescriptions for new privacy protections.

Finally, I explain why this predictive-traditional-harm approach is more likely to lead to political action than the Panoptic approach, recasting prescriptions from some of the classic recent works of Information Privacy into more politically saleable forms by translating them through the traditional-harm lens.