
Peter Winn, Katz and the Origins of the “Reasonable Expectation of Privacy” Test

Comment by: Orin Kerr

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1291870

Workshop draft abstract:

The “reasonable expectation of privacy” test, formulated in the 1967 case of Katz v. United States, represents a great touchstone in the law of privacy. Katz is important not only because the test is used to determine when a governmental intrusion constitutes a “search” under the Fourth Amendment, but also because the test has found its way into state common law, statutes, and even the laws of other nations.

This article addresses the historical background of the framing of that decision and argues that credit for the development of the famous test belongs to counsel for Charles Katz, Harvey (now Judge) Schneider, who presented the test for the first time at oral argument, not in the briefs. The majority opinion’s failure to mention the test is explained by the fact that the law clerk responsible for drafting Justice Stewart’s majority opinion missed the oral argument. The test, of course, was articulated in Justice Harlan’s short concurring opinion, establishing him as not only a great jurist but also someone who knew how to listen. Finally, the article argues that Justice Harlan intended the famous test as an evolutionary modification of the previous trespass standard rather than a revolutionary new approach to the law, which is, in fact, exactly how subsequent courts understood and applied the standard.

Jane Winn, Privacy By Design

Comment by: Jane Winn

PLSC 2009

Workshop draft abstract:

“Privacy enhancing technologies” have been discussed for years by privacy advocates as a possible strategy for enhancing compliance with information privacy laws, but to date, none have ever had any significant impact on the way information technology is actually used.  This paper will suggest that the focus on “privacy enhancing technologies” is misguided because it reifies the social relationships that result in the production and distribution of information processing technologies.  In 2008, the Article 29 Working Party introduced the concept of “privacy by design” in its analysis of search engine information privacy practices, but did not elaborate on the meaning of this concept.  This paper will suggest that if “privacy by design” is interpreted as referring to the use of “adaptive management systems” in the design and distribution of information technology, then it would represent significant progress toward a more effective regulatory regime for information privacy.  Adaptive management systems are a widely used form of social regulation designed to permit dynamic identification and management of a wide range of health and safety risks.  Such “light touch” forms of regulation of upstream production and distribution of information processing technologies are more likely to enhance compliance with information privacy laws than a narrow focus on the features of products available to end users in downstream markets.

Alan Westin, Historical Perspectives on Privacy: From the Hebrews and Greeks to the American Republic

Comment by: Alan Westin

PLSC 2009

Workshop draft abstract:

1. Can we define and conceptualize privacy across three thousand years of Western political history and, if so, what are the critical measures?

2. What are the constant elements in the privacy arena that are still critical to privacy dynamics today and what are the new elements of privacy values and struggles generated by advanced technology and the computer age?

Peter Swire, Peeping

Comment by: James Rule

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1418091

Workshop draft abstract:

There have been recent revelations of “peeping” into the personal files of celebrities. Contractors for the U.S. State Department looked at passport files, without authorization, for candidates Barack Obama and John McCain.  Employees at UCLA Medical Center and other hospitals have recently been caught looking at the medical files of movie stars, and one employee received money from the National Enquirer to access and then leak information.  In the wake of these revelations, California passed a statute specifically punishing this sort of unauthorized access to medical files.

This article examines the costs and benefits of laws designed to detect and punish unauthorized “peeping” into files of personally identifiable information. Part I looks at the history of “peeping Tom” and eavesdropping statutes, examining the common law baseline.  Part II examines the current situation.  As data privacy and security regimes become stricter, and often enforced by technological measures and increased audits, there will be an increasing range of systems that detect such unauthorized use.  Peeping is of particular concern where the information in the files is especially sensitive, such as for tax, national security, intelligence, and medical files.

The remedy for peeping is a particularly interesting topic. Detection of peeping logically requires reporting of a privacy violation to someone. Recipients of notice, for instance, could include: (1) a manager in the hospital or other organization, who could take administrative steps to punish the perpetrator; (2) a public authority, who would receive notice of the unauthorized use (“peeping”); and/or (3) the individual whose files have been the subject of peeping. For the third category, peeping could be seen as a natural extension of current data breach laws, under which individuals receive notice when their data is made available to third parties in an unauthorized way. An anti-peeping regime would face issues very similar to the debates on data breach laws, such as what “trigger” should exist for the notice requirement and what defenses or safe harbors should exist so that notice is not necessary.

Daniel Solove & Neil Richards, Rethinking Free Speech and Civil Liability

Daniel Solove & Neil Richards, Rethinking Free Speech and Civil Liability

Comment by: Raymond Ku

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1355662

Workshop draft abstract:

One of the most important and unresolved quandaries of First Amendment jurisprudence involves when civil liability for speech will trigger First Amendment protections.  When speech results in civil liability, two starkly opposing rules are potentially applicable.  Since New York Times v. Sullivan, the First Amendment requires heightened protection against tort liability for speech, such as defamation and invasion of privacy.  But in other contexts involving civil liability for speech, the First Amendment provides virtually no protection.  According to Cohen v. Cowles, there is no First Amendment scrutiny for speech restricted by promissory estoppel and contract.  The First Amendment rarely requires scrutiny when property rules limit speech.

Both of these rules are widely accepted. However, there is a major problem: in a large range of situations, the rules collide. Tort, contract, and property law overlap significantly, so formalistic distinctions between areas of law will not adequately resolve when the First Amendment should apply to civil liability. Surprisingly, few scholars and jurists have recognized or grappled with this problem.

The conflict between the two rules is vividly illustrated by the law of confidentiality.  People routinely assume express or implied duties not to disclose another’s personal information.  Does the First Amendment apply to these duties of confidentiality?  Should it?  More generally, in cases where speech results in civil liability, which rule should apply, and when?  The law currently fails to provide a coherent test and rationale for when the Sullivan or Cohen rule should govern. In this article, Professors Daniel J. Solove and Neil M. Richards contend that the existing doctrine and theories are inadequate to resolve this conflict.  They propose a new theory, one that focuses on the nature of the government power involved.

Christopher Soghoian, Caught in the Cloud: Privacy, Encryption, and Government Back Doors in the Web 2.0 Era

Comment by: Michelle Finneran Dennedy

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1421553

Workshop draft abstract:

For the last twenty years, users have largely maintained digital possession of their own writings. Consumers would use programs like Microsoft Word and Corel’s WordPerfect to draft letters, and programs like Microsoft Excel or Intuit’s Quicken to manage their own finances. Were the government to take an interest in a document produced by one of these PC owners, law enforcement would have to first obtain a search warrant and then visit the person’s home in order to seize the computer. Cloud computing has changed everything. Companies like Google, Microsoft, and Adobe provide free access to fully functioning word processing, spreadsheet, presentation, and image manipulation software, all through a web browser. End users can collaborate with others, access their own files from any computer around the world, and not have to worry about data loss or backups, since the files are automatically backed up and stored “in the cloud.” While this shift to cloud computing (and in particular, “software as a service”) has brought significant benefits to consumers, it has also come with a hidden cost: the loss of privacy and the evisceration of traditional Fourth Amendment protections. Because users no longer hold the only copy of their files, law enforcement agents are no longer required to seek a warrant in order to obtain those personal documents. Now, thanks to the third-party doctrine, law enforcement can turn to a subpoena to force Microsoft, Google, and other service providers to turn over users’ private files.

This raises a number of significant privacy issues, such as the far lower evidentiary threshold required for a subpoena, the fact that service providers often have little to no incentive to fight the request, and the lack of notification provided to the end user.

Furthermore, this shift provides both law enforcement and intelligence agencies with significant economies of scale in surveillance — that is, instead of obtaining and serving individual warrants on hundreds (or thousands) of users, they can now go to a handful of service providers to obtain that same private information.

This article will examine these and other privacy issues related to cloud computing. First, it will trace the legal history of the third-party doctrine and explore its impact upon cloud-based services. It will also explore key cases in which law enforcement agencies were able to force technology companies to modify their products in order to better surveil end users.

Moving on, it will explore the development and widespread adoption of key cloud computing services. It will highlight some likely future trends that may affect users’ expectations of privacy, including the placement of cloud-based product icons on the desktops of new computers and the development of single-site browsers that may make it difficult for naive users to be aware that they are using an Internet-based product. The article will then trace out a series of “what ifs” to explore potential future pro-privacy developments in cloud computing, such as the local encryption of users’ documents before storing them online, and highlight how even these efforts could be frustrated by law enforcement. Finally, it will conclude with a set of policy and technology recommendations that could help to tip the privacy scales back towards the end user.
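The client-side encryption idea mentioned above can be made concrete with a brief illustrative sketch (not drawn from the article): a user encrypts a document locally before it is uploaded, so the provider, and anyone who subpoenas it, holds only ciphertext. The sketch below assumes Python’s third-party cryptography package; upload_to_cloud and the file name are hypothetical placeholders, not any real provider’s API.

# Illustrative sketch only: encrypt a document locally before it reaches a cloud provider.
# Assumes the third-party "cryptography" package; upload_to_cloud() is a hypothetical stub.
from cryptography.fernet import Fernet

def upload_to_cloud(name, data):
    # Hypothetical stand-in for a provider's upload API.
    print(f"uploaded {name}: {len(data)} bytes of ciphertext")

# The key is generated and kept on the user's machine; it is never sent to the provider.
key = Fernet.generate_key()
cipher = Fernet(key)

document = b"Draft letter containing sensitive personal details."
ciphertext = cipher.encrypt(document)      # encrypt locally, before upload
upload_to_cloud("letter.enc", ciphertext)  # the provider stores only ciphertext

# Only the key holder can recover the plaintext.
assert cipher.decrypt(ciphertext) == document

As the abstract notes, even an arrangement like this could be frustrated if law enforcement can compel the provider or software vendor to alter the client so that keys or plaintext are disclosed.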

Thomas Smedinghoff, Federated Identity Management – Balancing Privacy Rights, Liability Risks, and the Duty to Authenticate

Comment by: Gerry Stegmaier

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1471599

Workshop draft abstract:

Because identity management typically (but not always) requires the disclosure, verification, storage, and communication of personal information, the paper will focus on the impact of the legal issues surrounding identity management systems on the privacy of the individuals involved.  In particular, it will:

* Explain the basic principles that underlie the concept of commercial identity management (including in particular, the developing notion of federated identity management);

* Identify the numerous legal issues raised by the use of identity management (particularly federated systems);

* Focus on the privacy implications of the collection, verification, storage, communication, and disclosure of personal information in the context of identity management systems;

* Examine the role of identity management in addressing the legal and risk-based obligations to authenticate remote parties to on-line transactions; and

* Evaluate the legal requirements applicable to all identity management systems, and how the operation of those systems raises and might address issues of concern relating to the privacy and security of personal information.

Paul Schwartz, The Constitutional Right to Confidential and Secure Information Systems: German and American Telecommunications Privacy in Comparison

Comment by: Mark Eckenwiler

PLSC 2009

Workshop draft abstract:

In 2008, the German Constitutional Court declared a new constitutional right that protected the confidentiality and security of information systems.  According to the German High Court, this constitutional interest protects the individual against certain kinds of searches of her personal computer, cell phone, or electronic calendar.  To protect this right, the Court required the creation of suitable procedures by the legislature.   In my presentation, I discuss a broad series of contemporary German legal developments that respond to new kinds of online searches and telecommunications surveillance as well as the post-9/11 policy landscape.  The presentation will draw comparisons with the legal response in the United States.

Ira Rubinstein, Anonymity Reconsidered

Comment by: Mary Culnan

PLSC 2009

Workshop draft abstract:

According to the famous New Yorker cartoon, “On the Internet, nobody knows you’re a dog.” Today, about fifteen years later, this caption is less apt; if “they” don’t know who you are, they at least know what brand of dog food you prefer and who you run with. Internet anonymity remains very problematic. On the one hand, many privacy experts would say that anonymity is defunct, citing as evidence the increasing use of the Internet for data mining and surveillance purposes. On the other, a wide range of commentators are equally troubled by the growing lack of trust on the Internet, and many view as a leading cause of this problem the absence of a native “identity layer,” i.e., a reliable way of identifying the individuals with whom we communicate and the Web sites to which we connect. While the need for stronger security and better online identity mechanisms grows more apparent, the design and implementation of identity systems inevitably raises longstanding concerns over the loss of privacy and civil liberties. Furthermore, with both beneficial and harmful uses, the social value of Internet anonymity remains highly contested. For many, this tension between anonymity and identity seems irresolvable, leading to vague calls for balancing solutions or for simply preserving the status quo because proposed changes would only make matters worse. This paper offers a fresh look at some of the underlying assumptions of the identity-anonymity standoff by re-examining the meaning of anonymity and raising questions about three related claims: 1) anonymity is the default in cyberspace; 2) anonymity is essential to protecting online privacy; and 3) the First Amendment confers a right of anonymity. Based on the results of this analysis, the paper concludes by critically evaluating the recently issued CSIS report “Securing Cyberspace for the 44th Presidency,” which includes seven major recommendations, one of which is that the government require strong authentication for access to critical infrastructure.

Marcy Peek, The Observer and the Observed: Re-imagining Privacy Dichotomies in Information Privacy Law

Comment by: Colin Koopman

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1492231

Workshop draft abstract:

Information privacy law and scholarship have drawn a false dichotomy between those who violate privacy (the “observers” or “watchers”) and those who have their privacy violated (the “observed” or the “watched”). From the Orwellian concept of Big Brother (in the book 1984), in which everyone is watched at virtually all times, theoretical conceptions of privacy moved to the Foucault-ian notion of the Panopticon, as expressed concretely in the late eighteenth century by Jeremy Bentham. His concept of the Panopticon was of a prison, circular in architectural design, that had at its center a watch guard tower rising above the circular prison. The key aspect of the Panopticon was the central watch guard tower, which was designed with blackened windows that allowed the guards to see out but prevented the prisoners from seeing in. Thus, the prisoners could not know whether or when the guards were watching them at any given moment. This created a situation of perfect surveillance and perfect control, for the prisoners had no idea at any given time whether the guards were watching or even whether the guards were in the tower at all. In fact, no one had to be in the tower at any given time, for the prisoners knew that they might be surveilled, or not, at any moment of the day or night.

These Orwellian, Foucault-ian, and Bentham-ian notions of privacy centered around the dichotomous concept of the observer vs. the observed, or the watcher vs. the watched. None of these notions, which are fundamentally notions of surveillance, takes into account the more fluid concept of the observed and the observer mutually engaging in observation, or, to put it another way, both parties (whether consensually or not) watching each other. Information privacy law generally assumes that a person usually wants privacy and that there is a watcher-watched relationship in which a watcher invades a person’s privacy (legally or not). But that assumption is driven by the false dichotomy between the observed and the observer and by an erroneous assumption that the observed generally desires privacy vis-à-vis the observer. Once we push at the borders of these assumptions, we begin to understand that privacy relations are more nuanced than often portrayed by information privacy law and scholarship. For example, as technology progresses and webcams (one-way or two-way), reality shows, long-range imaging devices, video-enhanced cell phones, easily accessible personal information via Internet databases or social networking sites, and the like become commonplace, the meanings of privacy are altered, and we all take on multiple, shifting roles of the watcher and the watched at various times. In effect, we are all watching each other.

This new paradigm has a myriad of implications for conceptions of privacy. For example, if one value of privacy is self-development, and the concept of self-development formed in a less complicated environment of relatively stable observed/observer relations is no longer the norm, then self-development becomes less about privacy and more about constructing identity and the presentation of self in everyday life (see, e.g., Erving Goffman’s works). Indeed, as technology progresses, we all end up in the roles of the watcher and the watched, whether simultaneously or at distinct points in time. As quantum physics teaches us, the knowledge that observation is taking place changes the behavior of the observed; because observation is ubiquitous in the modern, technological world, our conceptions of normative values such as self-development, reasonable expectations of privacy, the privacy torts, and privacy mandates embodied in federal and state law might need to be re-imagined.