Archives

Frederik Zuiderveen Borgesius, A New Regulatory Approach for Behavioral Targeting

Comment by: Omer Tene

PLSC 2013

Workshop draft abstract:

Behavioral targeting forms the core of many privacy-related questions on the Internet. It is an early example of ambient intelligence: technology that senses and anticipates people’s behavior in order to adapt the environment to their needs. This makes behavioral targeting a good case study for examining some of the difficulties that privacy law faces in the twenty-first century.

The paper addresses the following question: in the context of behavioral targeting, how could regulation be improved to protect privacy without unduly restricting the freedom of choice of Internet users?

The paper explores two approaches to privacy protection. The first focuses on empowering the individual, for example by requiring companies to obtain the informed consent of the individual before data processing takes place. In Europe, for instance, personal data “must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law” (Article 8 of the Charter of Fundamental Rights of the European Union). The phrase “on the basis of the consent of the person” seems to be a loophole in the regime, as many Internet users click “I agree” to any statement that is presented to them. Insights from behavioral economics cast doubt on the effectiveness of the empowerment approach as a privacy protection measure.

The second approach focuses on protecting rather than empowering the individual. If aiming to empower people is not the right tactic to protect privacy, maybe specific prohibitions could be introduced. Some might say that the tracking of Internet users is not proportional to the purposes of marketers, and therefore should be prohibited altogether. But less extreme measures can be envisaged. Perhaps different rules could apply to different circumstances. Are data gathered while Internet users are looking to buy shoes less sensitive than data that reveal which books they consider buying or which online newspapers they read? Do truly innocent data exist?

One of the most difficult issues would be how to balance such prohibitions with personal autonomy, since prohibitions appear to limit people’s freedom of choice. The paper draws inspiration from consumer law, where similar problems arise. When should the law protect rather than empower the individual? A careful balance would have to be struck between protecting people and respecting their freedom of choice. The paper concludes with recommendations to improve privacy protection without unduly restricting people’s freedom of choice.

Priscilla M. Regan, Privacy and the Common Good: Revisited

Comment by: Kenneth Bamberger

PLSC 2013

Workshop draft abstract:

In Legislating Privacy: Technology, Social Values, and Public Policy (1995), I argued that privacy is of value not only to the individual but also to society in general, and I suggested three bases for the social importance of privacy. First, privacy is a common value in that all individuals value some degree of privacy and have some common perceptions about privacy. Second, privacy is a public value in that it has value to the democratic political process. And third, privacy is a collective value in that technology and market forces are making it hard for any one person to have privacy without all persons having a similar minimum level of privacy.

In this paper, I will first reflect briefly on the major developments that have affected public policy and philosophical thinking about privacy over the last fifteen-plus years. Most prominently, these include: (1) the rather dramatic technological changes in online activities, including social networking, powerful online search engines, and the quality of the merging of video/data/voice applications; (2) the rise of surveillance activities in the post-9/11 world; and (3) the rapid globalization of cultural, political and economic activities. As our everyday activities become more interconnected and seemingly similar across national boundaries, interests in privacy and information policies more generally tend also to cross these boundaries and provide a shared public and philosophical bond.

Then, I will turn attention to each of the three bases for the social importance of privacy, reviewing the new literature that has furthered philosophical thinking on this topic, including works by Helen Nissenbaum, Beate Roessler, and Valerie Steeves.

Finally, I will revisit my thinking on each of the three philosophical bases for privacy – expanding and refining what I mean by each, examining how each has fared over the last fifteen years, analyzing whether each is still a legitimate and solid basis for the social importance of privacy, and considering whether new bases for privacy’s social importance have emerged today. In this section, I am particularly interested in developing more fully both the logic behind privacy as a collective value and the implications of viewing privacy from that perspective.

Clare Sullivan, The Proposed Consumer Privacy Bill of Rights – The Australian Experience of its Effectiveness

Comment by: Scott Mulligan

PLSC 2013

Workshop draft abstract:

This paper examines the Consumer Privacy Bill of Rights proposed by the Obama Administration in 2012 as a “blueprint for privacy in the information age,” having regard to Australia’s experience in applying the seven proposed privacy principles.

The same basic privacy principles have applied to most businesses in Australia for over a decade. The Australian experience in implementing and operating the privacy principles, and the ability of the principles to deal effectively with privacy issues, provide a useful model for assessing the effectiveness of the proposal for the United States.

There are many similarities between the United States and Australia which make Australia an ideal comparative model. Like the United States, Australia is a federation of States, with the Australian Constitution being based on the United States Constitution. Like the United States, Australia has a federal system of government and a common law legal system. Both countries face the same issues in protecting consumer privacy while also fostering free enterprise.

The paper discusses Australia’s experience in implementing the privacy principles, how Australia has encouraged compliance, and the overall effectiveness of the principles from the perspective of consumers and business. The paper concludes with a discussion of the ability of the principles to deal with present and future privacy issues faced by both Australia and the United States.

Paula Helm, What the Concept of Anonymity in Self-Help Groups Can Teach Us About Privacy

Comment by: Joseph Hall

PLSC 2013

Workshop draft abstract:

In this paper I’ll confront currently debated privacy theories with empirical data about Alcoholics Anonymous. This will lead me to two arguments. First, I’ll point out some serious shortcomings in the discourse that assumes a natural association between privacy protection and the value of freedom. Second, I’ll discuss some requirements that need to be fulfilled before citizens are able to make use of privacy-protection laws in order to feel freer in society. In particular, I’ll discuss, on the basis of my empirical findings, some basic requirements for people to recover from addiction. Subsequently, I’ll argue for an intermediate step in the association of privacy and freedom. This step can be implemented by differentiating between subjective and social freedom. Subjective freedom in this model is to be understood as the necessary condition for using privacy as a tool for generating social freedom. With this condition in mind, privacy can in turn be understood as the necessary condition for building up social freedom.

The data I observe show that if subjective freedom is not given, privacy protection tends to develop a counterproductive dynamic. Since subjective freedom here means not being captured in an addictive pattern, that captivity first and foremost has to be broken. Addicted people therefore have to go through the process of disclosing their privately kept secrets in order to get help from outside. Only with this help can they develop the power to recover from their addiction. Hence, they first need to give up privacy to gain stability over their pattern, thus building up subjective freedom. Having found subjective freedom in an abstinent life, they can finally use their ‘right to privacy’ to resolve the question of what social freedom may be for them.

To develop these arguments, the basic idea of associating privacy with the value of freedom will be contrasted with the relativity of the notion of privacy. The latter can be shown through historicization. To this end, the historical interpretations that implicitly underlie our emotional responses to ‘privacy’ today will be sketched.

Against this background, a praxis-oriented discussion can be introduced. It concerns the effects of the public-private dichotomy when it comes to dealing with crises and dependencies. The concept of addiction will be confronted with its counterpart, the concept of recovery.

On the basis of these considerations, the concept of anonymity presented by Alcoholics Anonymous will be introduced. It is the core concept concerning the relationship between privacy and recovery. Anonymity as used and produced by self-help groups can answer the question of how privacy protection can be used by citizens searching for ways to build up subjective freedom within the social grid.

In conclusion, we will see why, and in which form, anonymity can serve as a complement to privacy rights, supporting the idea of differentiating between subjective and social freedom.

Christopher Wolf, Delusions of Adequacy? Examining the case for finding the US adequate for cross-border EU-US data transfers

Comment by: Joel Reidenberg

PLSC 2013

Workshop draft abstract:

The Council and the European Parliament have given the European Commission the power to determine, on the basis of Article 25(6) of Directive 95/46/EC, whether a third country ensures an adequate level of protection by reason of its domestic law or of the international commitments it has entered into. The effect of such a decision is that personal data can flow from the 27 EU countries and three EEA member countries (Norway, Liechtenstein and Iceland) to that third country without any further safeguard being necessary. The Commission so far has recognized Andorra, Argentina, Australia, Canada, Switzerland, the Faeroe Islands, Guernsey, the State of Israel, the Isle of Man, and Jersey as providing adequate protection. The Commission has not recognized the privacy framework of the United States as adequate, although it has accepted the US Department of Commerce’s Safe Harbor Privacy Principles and the transfer of Air Passenger Name Record data to the United States’ Bureau of Customs and Border Protection.

Despite the elaborate process in the European Union for considering the adequacy of a national privacy framework, the Commission has overlooked essential elements of the US framework that, from an objective standard of adequacy, should result in a positive finding and the elimination of burdensome and expensive additional requirements for cross-border transfers of data. With the possible advent of a new EU General Data Protection Regulation, and the potential for even greater restrictions on cross-border transfers to “non-adequate” nations, this paper reviews the elements of the US framework that entitle the US to a finding of adequacy and shows, with respect to certain data and as to security breaches (one of the most significant threats to privacy), that the US framework is in fact more protective than that in the EU.

The paper acknowledges shortcomings in the frameworks on both sides of the Atlantic, and compares the approaches to improving the frameworks, but concludes that the shortcomings in the US system, real or perceived, are insufficient justification for a refusal by the European Commission to find the US framework adequate. The paper concludes that the goals of international cooperation, of improving privacy and data protection, of interoperability and of coordinated enforcement will be furthered by the recognition of the US framework as adequate.

Anupam Chander and Uyen P. Le, The Free Speech Foundations of Cyberlaw

Comment by: Vince Polley

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2320124

Workshop draft abstract:

A First Amendment-infused legal culture that prizes speech offered an ideal environment on which to build the speech platforms that make up Web 2.0. Executive, congressional and judicial interventions during the first decade of the World Wide Web manifested a clear desire to protect the speech potential of a media platform that allowed individuals to speak directly to each other outside the confines of newspapers or broadcasters. While interest group pressures were certainly relevant to the legislative process, the First Amendment provided normative force to the arguments of Silicon Valley enterprises, both in the halls of Congress and in the courtroom. A commitment to free speech also helped ward off strict privacy obligations like those imposed in Europe and Asia. Free speech thus establishes the normative foundation of American cyberlaw.

William McGeveran, Privacy, Guns, and Neutral Principles

Comment by: Larry Rosenthal

PLSC 2013

Workshop draft abstract:

Among the policy debates that arose after the Newtown school shooting, many privacy-related issues appeared. We heard about restrictions on the collection and use of data by federal authorities that had been championed by the National Rifle Association. Gun control advocates called for a more comprehensive database to use in background checks of gun purchasers. New York State passed a law increasing reporting requirements for mental health issues. A newspaper posted an online map identifying the homes of gun permit holders. Yet the rhetoric often related more to a speaker’s position on underlying gun issues than to any consistent view of the privacy interests at stake. This article will explore the numerous and often quirky privacy rules around firearms. It will then seek to articulate consistent neutral principles for determining the boundaries of privacy interests.

Two major professional experiences prepare me well to take on this project. First, in the mid-1990s I worked extensively on gun control issues for members of Congress including then-Rep. Charles Schumer and for a gun control group. Second, since 2003 I have argued that campaign finance disclosure rules – another liberal cause célèbre – are too cavalier about individual privacy interests. With this background as an advocate of both gun control and privacy, I hope to develop a balanced approach to firearms-related data.

The paper will argue that there are some constitutional privacy concerns, and many additional features of privacy statutes and wise policy, that require restrictions on government collection and processing of data about firearms. At the same time, as the Supreme Court found in its landmark Whalen v. Roe decision, limited and secure government databases for important purposes are permissible, practical, and necessary. Along the way, I also hope to demonstrate how privacy doctrine has yet to generate robust neutral principles. Just as First Amendment doctrine protects diverse speakers on diverse topics, from soapbox orators to Nazi marchers, privacy rules must mature to the point where protection of personal information becomes an important goal, regardless of opinions about the persons or information involved.

Kate Crawford and Jason Schultz, The Due Process Dilemma: Big Data and Predictive Privacy Harms

Comment by: Bryan Cunningham

PLSC 2013

Workshop draft abstract:

The rise of “big data” analytics poses new challenges for privacy advocates. Unlike previous computational models that exploit personally identifiable information (PII) directly, such as behavioral targeting, big data often exploits PII indirectly. By analyzing primarily metadata, such as a set of predictive or aggregated findings without displaying or distributing the underlying PII, big data approaches often operate outside of current privacy protections (Rubinstein 2013; Tene and Polonetsky 2012). However, this does not mean big data is without substantial privacy risks. For example, the risk of bias or discrimination based on the inappropriate inclusion or exclusion of personal data about an individual still persists, a risk we call “predictive privacy harm.” Last year, the trans-national retailer Target was shown to be using data mining techniques to predict which female customers were pregnant, even if they had not announced it publicly (Duhigg, 2012). Such predictive analysis and categorization poses a threat to those individuals who are labeled, especially when it is based on underlying PII and performed without either their knowledge or consent.

Currently, individuals have a right to see and review records pertaining to them in areas such as health and credit information. But these existing systems are inadequate to meet current ‘big data’ challenges: FIPS and other notice-and-choice regimes fail to protect against data analytics in part because individuals are rarely aware of how their individual data is being used to their detriment. Therefore, we propose a new approach to predictive privacy harms: a right to “data due process.” In the Anglo-American legal tradition, due process prohibits the deprivation of an individual’s rights without affording her access to certain basic procedural components of the adjudication process, including the right to see and contest the evidence at issue, the right to appeal any adverse decision, and the right to know the allegations presented and be heard on the issues they raise. While some current privacy regimes offer nominal due process-like mechanisms, such as the right to audit one’s personal data record, these rarely include all of the necessary components to guarantee fair outcomes, and arguably many do not even apply to big data systems. A more rigorous system is needed, particularly given the inherent analytical assumptions and methodological biases built into many big data systems (boyd and Crawford 2012). Applying the concept of due process to big data and its associated predictive privacy harms, we assert that individuals who are “judged” by big data should have similar rights to those judged by the courts with respect to how their personal data has played a role in such adjudications.


boyd, danah and Crawford, Kate. 2012. “Critical Questions for Big Data,” Information, Communication and Society 15(5): 662–679.

Duhigg, Charles. 2012. “How Companies Learn Your Secrets,” New York Times, Feb 16, 2012.

Rubinstein, Ira. Forthcoming. “Big Data: The End of Privacy or a New Beginning?,” International Data Privacy Law.

Tene, Omer and Polonetsky, Jules. Forthcoming. “Big Data for All: Privacy and User Control in the Age of Analytics,” Northwestern Journal of Technology and Intellectual Property 11.

Kevin Bankston and Ashkan Soltani, Tiny Constables, Learned Hands and the Economics of Surveillance: Making Cents Out of US v Jones

Comment by: Caren Morrison

PLSC 2013

Workshop draft abstract:

As was made especially clear at last year’s USvJones.com competition at PLSC, deriving a clear principle from the concurrences in US v Jones that find a “reasonable expectation of privacy” against prolonged GPS tracking is a challenge, to say the least. However, two features of the Alito and Sotomayor opinions point toward a potential path forward:

By concluding that GPS tracking would only violate an expectation of privacy for “most offenses”, both opinions suggest that whether or not the surveillance violates the Fourth Amendment turns on the severity of the crime, and

By focusing on the fact that such extensive surveillance would not have been possible using older technologies (absent a “tiny constable” who could hide in one’s trunk), both opinions suggest that whether a particular electronic surveillance technique is constitutional turns on whether and how it would have been replicable by analog means.

Pulled together, these strands suggest that there is an economic, or at least calculable, element of Fourth Amendment reasonableness: someone engaging in a garden-variety crime could reasonably expect not to be electronically surveilled using a technique that would have been impossible, or impossibly costly, using non-technological methods. Put another way, because Jones could reasonably expect that the severity of his crime would not justify the expense of extended around-the-clock physical surveillance, he had a reasonable expectation of privacy against an automated, electronic version of the same surveillance.

Such an economic balancing would dovetail with how “reasonableness” is judged in the context of negligence torts, as famously articulated by Learned Hand in US v Carroll Towing.  Could it be that US v Jones points towards a similarly economics-based test for reasonableness under the Fourth Amendment?  If so, could it actually make sound policy, or an administrable rule?  And how exactly do the costs of physically tailing someone compare to the costs of tracking them via a GPS device attached to their car—or via cell phone tracking, or even using an airborne drone? Combining legal analysis by Fourth Amendment litigator and advocate Kevin Bankston with digital security expert Ashkan Soltani’s survey of the costs of varying types of surveillance compared to their pre-technological equivalents, this paper will look at the pros and cons of such an approach.
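For readers unfamiliar with US v Carroll Towing, the negligence standard the authors invoke is conventionally summarized by the Hand formula, sketched below in LaTeX; the second line is only a hypothetical illustration of the kind of cost-based balancing the abstract describes (the symbols C and S and the inequality are shorthand introduced here, not the authors' own test).

\[
B < P \cdot L \qquad \text{(Hand formula: a precaution is required when its burden } B \text{ is less than the probability of harm } P \text{ times the magnitude of the loss } L\text{)}
\]

\[
C_{\text{electronic}} \ll C_{\text{physical}}(S) \;\Longrightarrow\; \text{the surveillance is presumptively unreasonable} \qquad \text{(illustrative analogue only)}
\]

Here \(S\) stands for the severity of the suspected offense and \(C_{\text{physical}}(S)\) for the cost of achieving equivalent surveillance by non-technological means that an offense of severity \(S\) might plausibly justify.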