Archives

Paul Schwartz, The Constitutional Right to Confidential and Secure Information Systems: German and American Telecommunications Privacy in Comparison

Comment by: Mark Eckenwiler

PLSC 2009

Workshop draft abstract:

In 2008, the German Constitutional Court declared a new constitutional right protecting the confidentiality and security of information systems. According to the German High Court, this constitutional interest protects the individual against certain kinds of searches of her personal computer, cell phone, or electronic calendar. To protect this right, the Court required the legislature to create suitable procedures. In my presentation, I discuss a broad series of contemporary German legal developments that respond to new kinds of online searches and telecommunications surveillance, as well as to the post-9/11 policy landscape. The presentation will draw comparisons with the legal response in the United States.

Ira Rubinstein, Anonymity Reconsidered

Comment by: Mary Culnan

PLSC 2009

Workshop draft abstract:

According to the famous New Yorker cartoon, “On the Internet, nobody knows you’re a dog.” Today, about fifteen years later, this caption is less apt; if “they” don’t know who you are, they at least know what brand of dog food you prefer and who you run with. Internet anonymity nonetheless remains deeply problematic. On the one hand, many privacy experts would say that anonymity is defunct, citing as evidence the increasing use of the Internet for data mining and surveillance. On the other, a wide range of commentators are equally troubled by the growing lack of trust on the Internet, and many view as a leading cause of this problem the absence of a native “identity layer,” that is, a reliable way of identifying the individuals with whom we communicate and the Web sites to which we connect. While the need for stronger security and better online identity mechanisms grows more apparent, the design and implementation of identity systems inevitably raise longstanding concerns over the loss of privacy and civil liberties. Furthermore, with both beneficial and harmful uses, the social value of Internet anonymity remains highly contested. For many, this tension between anonymity and identity seems irresolvable, leading to vague calls for balancing solutions or for simply preserving the status quo on the ground that proposed changes would only make matters worse. This paper offers a fresh look at some of the underlying assumptions of the identity-anonymity standoff by re-examining the meaning of anonymity and raising questions about three related claims: (1) anonymity is the default in cyberspace; (2) anonymity is essential to protecting online privacy; and (3) the First Amendment confers a right of anonymity. Based on the results of this analysis, the paper concludes by critically evaluating the recently issued CSIS report “Securing Cyberspace for the 44th Presidency,” which includes seven major recommendations, one of which is that the government require strong authentication for access to critical infrastructure.

Marcy Peek, The Observer and the Observed: Re-imagining Privacy Dichotomies in Information Privacy Law

Comment by: Colin Koopman

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1492231

Workshop draft abstract:

Information privacy law and scholarship have drawn a false dichotomy between those who violate privacy (the “observers” or “watchers”) and those whose privacy is violated (the “observed” or the “watched”). From the Orwellian concept of Big Brother in the novel 1984, in which everyone is watched at virtually all times, theoretical conceptions of privacy moved to the Foucauldian notion of the Panopticon, conceived concretely in the late eighteenth century by Jeremy Bentham. Bentham’s Panopticon was a prison, circular in architectural design, with a guard tower rising above it at the center. The key aspect of the Panopticon was this central watch tower, designed with blackened windows that allowed the guards to see out but prevented the prisoners from seeing in. The prisoners thus could not know whether or when the guards were watching them at any given moment. This created a situation of perfect surveillance and perfect control, for the prisoners had no idea at any given time whether the guards were watching, or even whether the guards were in the tower at all. In fact, no one had to be in the tower at any given time, for the prisoners knew only that they might be surveilled, or not, at any moment of the day or night.

These Orwellian, Foucauldian, and Benthamite notions of privacy center on the dichotomous concept of the observer versus the observed, the watcher versus the watched. None of these notions, which are fundamentally notions of surveillance, takes into account the more fluid concept of the observed and the observer mutually engaging in observation, that is, both parties (whether consensually or not) watching each other. Information privacy law generally assumes that a person usually wants privacy and that there is a watcher-watched relationship in which a watcher invades a person’s privacy (legally or not). But that assumption is driven by the false dichotomy between the observed and the observer and by an erroneous assumption that the observed generally desires privacy vis-à-vis the observer. Once we push at the borders of these assumptions, we begin to understand that privacy relations are more nuanced than information privacy law and scholarship often portray. As technology progresses, and as webcams (one-way or two-way), reality shows, long-range imaging devices, video-enabled cell phones, and easily accessible personal information on Internet databases and social networking sites become commonplace, the meanings of privacy are altered, and we all take on multiple, shifting roles of the watcher and the watched. In effect, we are all watching each other.

This new paradigm has myriad implications for conceptions of privacy. For example, if one value of privacy is self-development, and the concept of self-development within relatively stable observed/observer relations is no longer the norm, then self-development becomes less about privacy and more about constructing identity and the presentation of self in everyday life (see, e.g., the works of Erving Goffman). Indeed, as technology progresses, we all end up in the roles of the watcher and the watched, whether simultaneously or at distinct points in time. As quantum physics teaches us, the knowledge that observation is taking place changes the behavior of the observed; because observation is ubiquitous in the modern, technological world, our conceptions of normative values such as self-development, reasonable expectations of privacy, the privacy torts, and privacy mandates embodied in federal and state law might need to be re-imagined.

Paul Ohm, The Probability of Privacy

Comment by: Michael Froomkin

PLSC 2009

Workshop draft abstract:

Data collectors and aggregators defend themselves against claims that they are invading privacy by invoking a verb of relatively recent vintage: “to anonymize.” By anonymizing the data, that is, by removing or replacing all of the names or other personal identifiers, they argue that they are negating the risk of any privacy harm. Thus, Google anonymizes data in its search query database after nine months; proxy email and web browsing services promise Internet anonymity; and network researchers trade sensitive data only after anonymizing them.

Recently, two splashy news stories revealed that anonymization is not all it is cracked up to be. First, America Online released twenty million search queries from 650,000 users. Next, Netflix released a database containing 100 million movie ratings from nearly 500,000 users. In both cases, the personal identifiers in the databases were anonymized, and in both cases researchers were able to “deanonymize” or “reidentify” at least some of the people in the database.

Even before these results, computer scientists had begun to theorize deanonymization. According to this research, none of which has yet been rigorously imported into legal scholarship, the utility and anonymity of data are linked. The only way to anonymize a database perfectly is to strip all of the information from it; any database that is useful is also imperfectly anonymous; and the more useful a database, the easier it is to reidentify the personal information within it.
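To make the reidentification logic concrete, here is a minimal sketch, not drawn from the Article and using entirely hypothetical records: stripping the name field leaves quasi-identifiers (ZIP code, birth date, sex) that can be joined against a public roster to restore identities.

```python
# Illustrative only: hypothetical records showing a linkage attack on
# "anonymized" data. Removing the name leaves quasi-identifiers
# (ZIP code, birth date, sex) that can be joined to a public roster.

anonymized = [  # direct identifiers removed, quasi-identifiers retained
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "..."},
    {"zip": "60614", "dob": "1980-01-02", "sex": "M", "diagnosis": "..."},
]

public_roster = [  # e.g., a voter list carrying the same fields plus names
    {"name": "Alice Example", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "Bob Example", "zip": "60657", "dob": "1980-01-02", "sex": "M"},
]

def reidentify(anon_rows, roster):
    """Join the two datasets on their shared quasi-identifiers."""
    hits = []
    for row in anon_rows:
        for person in roster:
            if all(row[k] == person[k] for k in ("zip", "dob", "sex")):
                hits.append((person["name"], row["diagnosis"]))
    return hits

print(reidentify(anonymized, public_roster))
# -> [('Alice Example', '...')]  # reidentified despite "anonymization"
```

The same linkage logic, applied to richer quasi-identifiers such as search queries or movie ratings, is what drove the AOL and Netflix reidentifications described above.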

This Article takes a comprehensive look at both claims of anonymization and theories of reidentification, weaving them into law and policy. It compares anonymization standards and practices in online and data privacy with those in health policy, where these issues have been grappled with for decades.

The Article concludes that claims of anonymization should be viewed with great suspicion. Data is never “anonymized,” and it is better to speak of “the probability of privacy” of different practices. Finally, the Article surveys research into how to reduce the risk of reidentification, and it incorporates this research into a set of prescriptions for various data privacy laws.

Erin Murphy, Relative Doubt: Partial Match or “Familial” Searches of DNA Databases

Comment by: Peter Winn

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1498807

Workshop draft abstract:

This paper sets forth an architecture for considering the relevant legal standards for familial searches. Familial searches are searches of a DNA database, using a crime-scene sample profile, that look not for a complete match but rather for partial matches. Using principles of heritability, such partial matches may allow investigators to identify relatives of the perpetrator in cases in which the perpetrator herself is not in the database. California recently adopted rules governing familial searches, and many states and the federal government are contemplating following suit. This article is a collaboration with Dr. Yun Song (Statistics and Computer Science) and Dr. Montgomery Slatkin (Integrative Biology), both of UC Berkeley, who have derived a formula for determining the likely results (in terms of number of hits) of various partial match searches. Currently, there is very little legal literature about familial searching (as it is a relatively new idea), and there is virtually no statistical work contemplating the number of profiles likely to be returned by various levels of searches. Moreover, in the rush to embrace “familial searching,” legal actors overlook the probabilistic sensitivity of various approaches. Dr. Song’s formulas provide a springboard from which to examine important legal questions, such as how close a match ought to be to justify brief detention (reasonable articulable suspicion), a search warrant or an arrest warrant (probable cause), or perhaps even a subpoena for an evidentiary sample (relevance).
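To illustrate what a partial match search does, the toy sketch below, which is not Dr. Song’s formula and uses hypothetical profiles and thresholds, counts shared alleles locus by locus between a crime-scene STR profile and each database profile, flagging near-matches that could indicate a relative.

```python
# Illustrative only: a toy partial-match search over STR profiles.
# Each profile maps a locus name to its set of two alleles; the profiles
# and the threshold are hypothetical, not the paper's statistics.

crime_scene = {"D3S1358": {15, 17}, "vWA": {16, 18}, "FGA": {21, 24}}

database = {
    "profile_A": {"D3S1358": {15, 17}, "vWA": {16, 18}, "FGA": {21, 24}},  # full
    "profile_B": {"D3S1358": {15, 16}, "vWA": {16, 18}, "FGA": {20, 24}},  # partial
    "profile_C": {"D3S1358": {12, 14}, "vWA": {14, 15}, "FGA": {19, 26}},  # unrelated
}

def shared_alleles(p, q):
    """Count alleles shared locus-by-locus between two profiles."""
    return sum(len(p[locus] & q[locus]) for locus in p)

total = sum(len(alleles) for alleles in crime_scene.values())  # 6 alleles here
for name, profile in database.items():
    score = shared_alleles(crime_scene, profile)
    if score == total:
        label = "full match"
    elif score >= total // 2:  # toy threshold for a "partial match"
        label = "partial match (possible relative)"
    else:
        label = "no match"
    print(f"{name}: {score}/{total} shared alleles -> {label}")
```

Actual familial searching relies on likelihood ratios computed from allele frequencies rather than raw counts, which is precisely where the statistical work described above comes in.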

Deirdre Mulligan & Ken Bamberger, From Privacy on the Books to Privacy on the Ground: the Evolution of a New American Metric

Comment by: Jeff Sovern

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1568385

Workshop draft abstract:

The sufficiency of U.S. information privacy law is the subject of heated debate. A majority of privacy scholars and advocates contend that the existing patchwork of U.S. regulation fails to ensure across-the-board conformity with the standard measure of privacy protection: the Fair Information Practice Principles (FIPPS) first articulated in the early 1970s. U.S. law, they argue, further falls far short of the EU’s omnibus privacy regime, thereby failing to protect against a variety of privacy-based harms. A smaller group of scholars similarly faults the U.S. for latching onto a watered-down version of FIPPS that emphasizes the procedural requirements of notice and individual choice to the exclusion of a substantive consideration of the harms and benefits to society as a whole that result from flows of personal information, creating bureaucracy in lieu of privacy protection.

These critiques’ positive claims regarding U.S. law’s departure from FIPPS are largely true. Yet, we argue, these debates generate far more heat than light on the question of which laws provide meaningful privacy protection. The emphasis on measuring U.S. privacy protection by the FIPPS metric simply misses the mark: focusing on a largely procedural standard offers limited utility in guiding corporate decisionmaking to protect privacy. It thus ignores important shifts in the conception of privacy in the United States, and therefore, perhaps, in how the success of its protection should be assessed.

This paper, the first in a series drawing on a qualitative empirical study of privacy practices in U.S. corporations, argues instead that FIPPS no longer represents either the exclusive goal of U.S. privacy policy or the sole metric appropriate for assessing privacy protection. Rather, this article demonstrates that U.S. information privacy policy over the last decade, as understood both by regulators and by the firms implementing privacy measures through regulatory compliance, evidences a second, and very “American,” definition of informational privacy. As demonstrated both by institutional choices regarding privacy regulation and by qualitative data regarding corporate privacy practices, informational privacy protection in the U.S. today is rooted not in fair notice and process, but in substantive notions of consumer expectations and consumer harm. The corporate practices resulting from this “expectations and harm” definition of privacy, in turn, often promise far greater substantive privacy protection than any FIPPS regime could provide.

This initial effort to examine how the form and oversight structure of information privacy law influence its implementation and effect illustrates the value of the “holistic evaluation(s) of privacy protection systems” recommended by Charles Raab. Looking at rights and obligations on paper is insufficient to guide policy: better privacy protection requires analysis of how law works in the wild.

Jon Mills, The New Global Press and Privacy Intrusions: The Two Edged Sword

Comment by: Eddan Katz

PLSC 2009

Workshop draft abstract:

The free press is a critical global value.  At the same time, the press continually intrudes on another critical global value, individual privacy.  How should these values be balanced in a complex global society?

First, what is the modern press? “Nontraditional” reporters are publishing news every day worldwide. Should free press protections extend to all of these individuals? Moreover, modern technology has given this new press a multitude of new ways to collect information and the ability to disseminate it worldwide.

Advancing, or balancing, the values of free press and privacy requires understanding that privacy invasions occur across borders and legal jurisdictions with inconsistent laws. The global context is complicated and contradictory. A matrix of international and national law, treaties, state law, codes, and regulations forms the background for a borderless press and global intrusions. Countries vary greatly in their treatment of privacy, especially in how they address privacy violations committed by the media. One example of a decision affecting global media was an Argentine court order requiring Yahoo to censor its search results for the former soccer star Diego Maradona. Finding the court’s language broad, Yahoo decided to remove all search results for Maradona. How many courts would have reached the same decision? Some forums are more favorable to privacy and some more favorable to a free press. Understanding the nature of the modern global press and the hodgepodge of global laws is a necessary predicate to articulating principles that balance these vital values.

Jacqueline Lipton, “We, the Paparazzi”: Developing a Privacy Paradigm for Digital Video

Comment by: Patricia Sanchez Abril

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1367314

Workshop draft abstract:

Digital age privacy law focuses mostly on text files containing personal data. Little attention has been paid to privacy interests in video files that may portray individuals in an unflattering or embarrassing light. Now that digital video technology, including inexpensive cellphone cameras, is widespread in the hands of the public, this focus needs to shift. Once a small percentage of online content, digital video is now appearing online at an exponential rate, largely due to the growth of online social networking services such as YouTube, MySpace, Flickr, and Facebook.

The sharing of video online has become a global phenomenon. At the same time, the lack of effective privacy protection for these images has become a global problem. Digital video poses four distinct privacy problems, arising from the de-contextualization, dissemination, aggregation, and permanency of online video information. While video shares some of these attributes with text-based records, this article argues that the unique qualities of video and multi-media files necessitate a place of their own in online privacy discourse. The article both identifies a rationale for, and critiques potential approaches to, digital video privacy. It suggests that legal regulation alone is unlikely to provide the solutions we need to protect privacy in digital video. Instead, it advocates a new, more nuanced multi-modal regulatory approach consisting of a matrix of legal rules, social norms, system architecture, market forces, public education, and non-profit institutions.

Jerry Kang, Self-Analytic Privacy

Comment by: Susan Freiwald

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1729332

Workshop draft abstract:

[1] Recent technological innovations present a new problem in the information privacy space: the privacy of self-analytics. By “self-analytics,” we mean the collection and processing of data by an individual about that individual in order to increase her self-knowledge for diagnostics, self-improvement, and self-awareness. Think Google Analytics, but applied to the self rather than to one’s website. In this Article, we describe this new problem space and engage in a transdisciplinary analysis, focusing on the case study of locational traces.

[2] In undertaking this analysis, we are mindful of what has become the standard script for privacy analyses in the law reviews: (i) identify some new threatening technology; (ii) trot out a parade of horribles; (iii) explain why the “market” has not already solved the problem; and (iv) recommend some changes in code and law that accord with the author’s values. This script is standard for sensible reasons, but we aim to go further.

[3] In particular, we make two theoretical contributions. In addition to defining a new category of personal data called “self-analytics,” we distinguish between micro and macro definitions of privacy: the former focused on individual choice regarding, or consent to, personal data processing; the latter using instead a system-wide measure of the “speed” of personal data flow. The macro “system-speed” definition is offered to supplement, not replace, the traditional micro “individual-control” definition. Still, this supplemental conception of information privacy has substantial consequences. Indeed, we go so far as to suggest that the nearly exclusive micro approach to privacy has been a fundamental privacy error.

[4] In addition to these theoretical interventions, we aim to be concrete in our recommendations. In particular, we provide the design specifications, both technical and legal, of a new intermediary called the “data vault,” which we believe is best suited to solve the privacy problem of self-analytics. In making this case, we hope to exhibit the value of a genuinely transdisciplinary engagement across law, engineering, computer science, and technology studies in solving a concrete problem.
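As one way to picture such an intermediary, the sketch below, our own hypothetical reading rather than the Article’s actual design specifications, shows a vault that keeps raw locational traces inside and releases only aggregate answers to self-analytic queries.

```python
# Illustrative only: a toy "data vault" interface for self-analytics over
# locational traces. The class and method names are hypothetical; the
# Article's actual design specifications may differ.
from dataclasses import dataclass, field


@dataclass
class LocationFix:
    timestamp: float  # seconds since some epoch
    lat: float
    lon: float


@dataclass
class DataVault:
    """Holds the individual's traces; only aggregates leave the vault."""
    owner: str
    _traces: list = field(default_factory=list)  # raw data stays inside

    def deposit(self, fix: LocationFix) -> None:
        self._traces.append(fix)

    def count_fixes_near(self, lat: float, lon: float, radius: float) -> int:
        """Self-analytic query: how many recorded fixes fall near a place?
        Returns an aggregate count, never the raw trail itself."""
        return sum(
            1 for f in self._traces
            if abs(f.lat - lat) <= radius and abs(f.lon - lon) <= radius
        )


vault = DataVault(owner="alice")
vault.deposit(LocationFix(0.0, 37.87, -122.26))
vault.deposit(LocationFix(3600.0, 37.87, -122.26))
print(vault.count_fixes_near(37.87, -122.26, 0.01))  # -> 2
```

Keeping raw traces inside the vault while letting only aggregates out also echoes the macro “system-speed” idea in paragraph [3]: the design slows the flow of personal data out of the individual’s control.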

Woodrow Hartzog, A Promissory Estoppel Theory for Confidential Disclosure in Online Communities

Comment by: Allyson Haynes

PLSC 2009

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1473561

Workshop draft abstract:

Is there any safe place to disclose personal information online? Traditional wisdom dictates that individuals have no legitimate expectation of privacy in information posted online. Nevertheless, Internet users often disclose sensitive information. Nowhere is the need for confidential disclosure more apparent than in online communities, particularly for community members seeking support. Yet traditional legal remedies for privacy violations, such as the disclosure tort and intentional infliction of emotional distress, have been generally ineffective in protecting self-disclosed information. This article proposes an alternative theory of protection and recovery for online community members based on the equitable doctrine of promissory estoppel. To ensure mutual accountability, community members could promise to keep other members’ information confidential through a website’s terms of use agreement. Under the third-party beneficiary doctrine or the concept of dual agency, these agreements could create a safe place to disclose information through the mutual availability of promissory estoppel. While this remedy will not serve as a panacea for privacy harms online, it could protect some of the privacy interests of online community members while also promoting speech through the promise of confidentiality.