Monthly Archives: May 2013

Bryan Choi, The Tax Loophole to Constitutional Privacy

Comment by: Derek Bambauer

PLSC 2013

Workshop draft abstract:

Even as the third party doctrine has come under sharp criticism in Fourth Amendment jurisprudence, an eerily similar workaround has been developing under the Fifth Amendment. The third party doctrine grew out of a tax enforcement case that held that a taxpayer has no reasonable expectation of privacy in financial records held by a third party such as a bank. That rule was later generalized to phone records and any other information held by a third party.

Likewise, a recent set of tax enforcement cases in the courts of appeals (5th, 7th, 9th) has held that taxpayers are not entitled to invoke the Fifth Amendment privilege against self-incrimination in order to withhold statutorily required records of offshore bank accounts. In essence, the reasoning adopted by those courts is that, if the records are required to be kept by the defendant, then the government already knows they exist and the compelled disclosure of those records is not incriminating — unless their very existence would indicate criminal activity. The fact that the contents of those records might be incriminating is irrelevant.

This case study provides an opportunity to reevaluate the controversial “required records” doctrine, as well as to revisit the long-running scholarly debate regarding the overlapping roles of the Fourth and Fifth Amendments in safeguarding individual privacy from governmental intrusion. In isolation, the tax enforcement cases seem innocuous enough. Yet, in future cases, the required records doctrine could easily be extended to phone records and other information of governmental interest, in the same manner as the third party doctrine. If we think the third party doctrine has gone too far, we should be wary of retracing its steps under a different guise.

Lilian Edwards and Edina Harbinja, Post mortem privacy: a comparative perspective

Comment by: Deven Desai

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2267388

Workshop draft abstract:

Privacy is the hot topic of the moment in both the EU and US, yet debates around legal intervention to protect privacy have so far mainly been restricted to protecting the living data subject. Common law jurisdictions such as the US and England and Wales have a long tradition of restricting rights seen as personal to the deceased to enforcement by them during their lifetime; hence, e.g., the restrictions on representatives bringing libel, personality and emotional distress claims after death. Civilian legal systems, on the other hand, are traditionally much more respectful of the interests of the deceased, especially the creator, in preserving their reputation and the integrity of their creations after death. EU data protection laws are generally interpreted not to protect the personal data of the dead, though a few countries such as Estonia and Bulgaria have ventured into this territory, and the proposed reforms of the draft DP Regulation make it more, not less, likely that protection will be restricted to the living.

In the meantime, however, a parallel slow-burning debate has ignited concerning the preservation and transmission of emails, social media profiles and other so-called digital assets after death. These issues are currently governed largely by the law of contract, as dictated by standard-form service provider terms of service, but judicial and legislative interest is emerging on both sides of the Atlantic. As in paradigm cases such as In re Ellsworth and In re Facebook, disputes over digital assets on death tend to display a patent conflict between the wishes (and sometimes the economic interests) of the living and the privacy rights of the dead. This paper argues that the debate over the commodification and rights of ownership and control over digital assets on death has the potential to illuminate and ignite a debate over whether privacy interests do, and indeed should, end or wither on death. The debate will be illustrated with comparisons from the law of personality rights, moral rights, defamation and organ donation.


No. 2005-296, 651-DE (Mich. Prob. Ct. 2005). See discussion in Baldas, T., “Slain Soldier’s E-Mail Spurs Legal Debate: Ownership of Deceased’s Messages at Crux of Issue,” 27 Nat’l L.J. 10 (2005).

In re Request for order requiring Facebook, Inc. to produce documents and things, Case No. C 12-80171 LHK (PSG), 9/20/201 .

Babak Siavoshy, Fourth Amendment Regulation of Information Processing

Comment by: Stephen Henderson

PLSC 2013

Workshop draft abstract:

When (if ever) should the processing, analysis, or manipulation of evidence—rather than its mere collection—by the government trigger the Fourth Amendment?  This essay addresses some of the difficult line-drawing problems that arise from this question.

The Fourth Amendment protects the people from unreasonable government searches and seizures of persons, houses, papers, and effects.  Increasingly, however, government entities gather information not by rummaging through people’s things, but rather by using technology to process, analyze, or manipulate evidence or information that is already in the government’s hands or otherwise exposed.  For instance, the government may uncover information about a person by analyzing DNA he “abandoned” on the sidewalk or a discarded coffee cup; it might learn what happens in his house by processing the heat signatures emanating from its walls; or it might learn his habits by stringing together the pattern of his “public” movement using thousands of data points from cameras, government weather satellites, or automatic license plate readers. In each of these cases, the physical form of what is collected—DNA, heat, or visual information exposed to the public—is either exposed or already in the government’s hands.  It is the government’s use of technology to process, analyze, and enhance what is collected that makes the evidence useful, and that raises potential privacy concerns.

One response to these developments—perhaps representing the conventional wisdom—is that there are few, if any, constitutional limits on the government’s ability to manipulate evidence it could otherwise legally obtain.  Advocates of this position correctly note that judicially imposed limitations on information processing create difficult line drawing problems (how do we distinguish between acceptable information processing and unacceptable information processing?) and risk tying the hands of law enforcement by arbitrarily restricting the use of technology in investigations.  Accordingly, the conventional wisdom makes a strong argument that the government’s use of technology to manipulate, process, or analyze evidence—where there is no obvious collection problem—does not and should not trigger the Fourth Amendment.

This essay argues that the conventional wisdom on information processing under the Fourth Amendment is both misplaced and overstated.  It is misplaced because it adopts a wooden construction and application of the Fourth Amendment (an otherwise flexible provision), one that risks significantly undermining the Amendment’s effectiveness and purpose, particularly in light of advancements in technology that allow the government to get the information it wants without engaging in conduct that looks like a Fourth Amendment search or seizure.  The conventional wisdom on information processing is also overstated, because it assumes that courts have heretofore been unwilling to impose constitutional limitations on information processing conduct by the government.  In fact, the issue is not new to the courts.  The judges and justices who shape Fourth Amendment law have grappled with what is essentially technologically enhanced information processing conduct in cases as varied as Kyllo v. United States, Skinner v. Railway Labor Executives’ Association, Walter v. United States, United States v. Jones, and even Katz v. United States.  An overview of these and other cases suggests, first, that courts are willing to impose Fourth Amendment limitations on some information-processing conduct—or at the very least, that courts acknowledge that such conduct raises a Fourth Amendment question.  Second, it suggests a number of different solutions to the legitimate line-drawing and other concerns raised by advocates of the view that information processing should not, by itself, trigger the Fourth Amendment. While there are no perfect solutions, the essay suggests a theoretical framework and a path forward for evaluating the Fourth Amendment implications of the increasing use of technologically enhanced information processing by the government.

Andrea M. Matwyshyn, Talking Data

Comment by: Andrew Selbst

PLSC 2013

Workshop draft abstract:

In the wake of Sorrell v. IMS Health, open questions remain regarding the limitations on privacy regulation imposed by the First Amendment. A conceptual classification problem that is simultaneously visible in other bodies of law has crept into the intersection of privacy and the First Amendment: confusion over when (or whether) a data aggregator’s code (and its attached information) is a type of expressive, socially-embedded act of communication or a type of free-standing communicative yet regulable “product.” This article argues that although the statute at issue in Sorrell failed First Amendment scrutiny, privacy regulation which restricts onward transfer of databases of consumer information – even transfers of anonymized data – can, if carefully crafted, pass First Amendment scrutiny. By blending doctrinal First Amendment principles with the fundamental tenets of human subjects research protection imposed by Congress and the Department of Health and Human Services, this article explains the doctrinal limits of the First Amendment on future consumer privacy laws and offers an example of a possible First Amendment-sensitive approach to protecting consumer privacy in commercial databases.

David Gray & Danielle Citron, A Technology-Centered Approach to Quantitative Privacy

Comment by: Harry Surden

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2129439

Abstract:

We are at the cusp of a historic shift in our conceptions of the Fourth Amendment driven by dramatic advances in technologies that continuously track and aggregate information about our daily activities.  The Fourth Amendment tipping point was marked this term by United States v. Jones.  There, law enforcement officers used a GPS device attached to Jones’s car to follow his movements for four weeks.  Although Jones was resolved on narrow grounds, five justices signed concurring opinions defending a revolutionary proposition: that citizens have Fourth Amendment interests in substantial quantities of information about their public or shared activities, even if they lack a reasonable expectation of privacy in each of the constitutive particulars. This quantitative approach to the Fourth Amendment has since been the focus of considerable debate.  Among the most compelling challenges are identifying its Fourth Amendment pedigree, describing a workable test for deciding how much information is enough to trigger Fourth Amendment interests, and explaining the doctrinal consequences.  This Article takes up these challenges.

Our analysis and proposal draw upon insights from information privacy law.  Although information privacy law and Fourth Amendment jurisprudence share a fundamental interest in protecting privacy interests, these conversations have been treated as theoretically and practically discrete.  This Article ends that isolation and the mutual exceptionalism that it implies.  As information privacy scholarship suggests, technology can permit government to know us in unprecedented and totalizing ways at great cost to personal development and democratic institutions.  We argue that these concerns about panoptic surveillance lie at the heart of the Fourth Amendment as well.  We therefore propose a technology-centered approach to measuring and protecting Fourth Amendment interests in quantitative privacy.  As opposed to proposals for case-by-case assessments of information “mosaics,” which have so far dominated the debate, we argue that government access to technologies capable of facilitating broad programs of continuous and indiscriminate monitoring should be subject to the same Fourth Amendment limitations applied to physical searches.

Frederik Zuiderveen Borgesius, A New Regulatory Approach for Behavioral Targeting

Comment by: Omer Tene

PLSC 2013

Workshop draft abstract:

Behavioral targeting forms the core of many privacy related questions on the Internet. It is an early example of ambient intelligence, technology that senses and anticipates people’s behavior to adapt the environment to their needs. This makes behavioral targeting a good case study to examine some of the difficulties that privacy law faces in the twenty-first century.

The paper concerns the following question. In the context of behavioral targeting, how could regulation be improved to protect privacy, without unduly restricting the freedom of choice of Internet users?

The paper explores two ways of privacy protection. The first focuses on empowering the individual, for example by requiring companies to obtain informed consent of the individual before data processing takes place. In Europe for instance, personal data “must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law” (article 8 of the Charter of Fundamental Rights of the European Union). The phrase “on the basis of the consent of the person” seems to be a loophole in the regime, as many Internet users click “I agree” to any statement that is presented to them. Insights from behavioral economics cast doubt on the effectiveness of the empowerment approach as a privacy protection measure.

The second approach focuses on protecting rather than empowering the individual. If aiming to empower people is not the right tactic to protect privacy, maybe specific prohibitions could be introduced. Some might say that the tracking of Internet users is not proportional to the purposes of marketers, and therefore should be prohibited altogether. But less extreme measures can be envisaged. Perhaps different rules could apply to different circumstances. Are data gathered while Internet users are looking to buy shoes less sensitive than data that reveal which books they consider buying or which online newspapers they read? Do truly innocent data exist?

One of the most difficult issues would be how to balance such prohibitions with personal autonomy, since prohibitions appear to limit people’s freedom of choice. The paper draws inspiration from consumer law, where similar problems arise. When should the law protect rather than empower the individual? A careful balance would have to be struck between protecting people and respecting their freedom of choice. The paper concludes with recommendations to improve privacy protection, without unduly restricting people’s freedom of choice.

Priscilla M. Regan, Privacy and the Common Good: Revisited

Comment by: Kenneth Bamberger

PLSC 2013

Workshop draft abstract:

In Legislating Privacy: Technology, Social Values, and Public Policy (1995), I argued that privacy is of value not only to the individual but also to society in general, and I suggested three bases for the social importance of privacy. First, privacy is a common value, in that all individuals value some degree of privacy and have some common perceptions about privacy. Second, privacy is a public value, in that it has value to the democratic political process. And third, privacy is a collective value, in that technology and market forces are making it hard for any one person to have privacy without all persons having a similar minimum level of privacy.

In this paper, I will first reflect briefly on the major developments that have affected public policy and philosophical thinking about privacy over the last fifteen-plus years. Most prominently, these include: (1) the rather dramatic technological changes in online activities, including social networking, powerful online search engines, and the convergence of video/data/voice applications; (2) the rise of surveillance activities in the post-9/11 world; and (3) the rapid globalization of cultural, political and economic activities.  As our everyday activities become more interconnected and seemingly similar across national boundaries, interests in privacy and information policies more generally tend also to cross these boundaries and provide a shared public and philosophical bond.

Then, I will turn attention to each of the three bases for the social importance of privacy reviewing the new literature that has furthered philosophical thinking on this topic, including works by Helen Nissenbaum, Beate Roessler, and Valerie Steeves.

Finally, I will revisit my thinking on each of the three philosophical bases for privacy – expanding and refining what I mean by each, examining how each has fared over the last fifteen years, analyzing whether each is still a legitimate and solid basis for the social importance of privacy, and considering whether new bases for privacy’s social importance have emerged today. In this section, I am particularly interested in developing more fully both the logic behind privacy as a collective value and the implications for viewing privacy from that perspective.

Clare Sullivan, The Proposed Consumer Privacy Bill of Rights – The Australian Experience of its Effectiveness

Comment by: Scott Mulligan

PLSC 2013

Workshop draft abstract:

This paper examines the Consumer Privacy Bill of Rights proposed by the Obama Administration in 2012 as a “blueprint for privacy in the information age,” having regard to Australia’s experience in applying the seven proposed privacy principles.

The same basic privacy principles have applied to most businesses in Australia for over a decade. The Australian experience in implementing and operating the privacy principles, and the ability of the principles to really deal with privacy issues, provide a useful model for assessing the effectiveness of the proposal for the United States.

There are many similarities between the United States and Australia that make Australia an ideal comparative model. Like the United States, Australia is a federation of States, with the Australian Constitution being based on the United States Constitution. Like the United States, Australia has a federal system of government and a common law legal system. Both countries face the same issues in protecting consumer privacy while also fostering free enterprise.

The paper discusses Australia’s experience in implementing the privacy principles, how Australia has encouraged compliance, and the overall effectiveness of the principles from the perspective of consumers and business. The paper concludes with a discussion of the ability of the principles to deal with present and future privacy issues faced by both Australia and the United States.

Paula Helm, What the Concept of Anonymity in Self-Help Groups Can Teach Us About Privacy

Comment by: Joseph Hall

PLSC 2013

Workshop draft abstract:

In this paper I’ll confront currently debated privacy theories with empirical data about Alcoholics Anonymous. This will lead me to two arguments. First, I’ll point out some serious shortcomings in the discourse that assumes a natural association between privacy protection and the value of freedom. Second, I’ll discuss some requirements that need to be fulfilled before citizens become able to make use of privacy-protection laws in order to feel freer in society. In particular, I’ll discuss, on the basis of my empirical findings, some basic requirements for people to recover from addiction. I’ll then argue for an intermediate step within the association of privacy and freedom. This step can be implemented by differentiating between subjective and social freedom. Subjective freedom in this model is to be understood as the necessary condition for using privacy as a tool for generating social freedom; with this condition in mind, privacy in turn can be understood as the necessary condition for building up social freedom.

The data I observe show that if subjective freedom is not given, privacy protection develops a rather counterproductive dynamic. As subjective freedom here means not being captured in an addictive pattern, first and foremost that captivity has to be broken. Addicted people therefore have to go through the process of disclosing their privately kept secrets in order to get help from outside. Only with this help can they develop the power to recover from their addiction. Hence, they first need to give up privacy to gain control over their pattern, thus building up subjective freedom. Having found subjective freedom in an abstinent life, they can finally use their ‘right to privacy’ to work out what social freedom may be for them.

To develop these arguments, the basic idea of associating privacy with the value of freedom will be contrasted with the relativity of the notion of privacy. The latter can be brought out through historicization. To this end, I will sketch those historical interpretations which implicitly underlie our emotional responses to ‘privacy’ today.

Against this background, a practice-oriented discussion can be introduced. It concerns the effects of the public-private dichotomy when it comes to dealing with crises and dependencies. The concept of addiction will be confronted with its counterpart, the concept of recovery.

On the basis of these considerations, the concept of anonymity developed by Alcoholics Anonymous will be introduced. It is the core concept for the relationship between privacy and recovery. Anonymity as used and produced by self-help groups can answer the question of how privacy protection can be used by citizens seeking to build up subjective freedom within the social grid.

In conclusion, we will see why, and in what form, anonymity can serve as a complement to privacy rights, supporting the idea of differentiating between subjective and social freedom.