Archives

Alan Rubel and Ryan Biava, A Framework for Comparing Privacy States

Comment by: Judith DeCew

PLSC 2013

Workshop draft abstract:

This paper develops a framework for analyzing and comparing privacy and privacy protections across (inter alia) time, place, and polity, and for examining factors that affect privacy and privacy protection. The framework provides a way to describe aspects of privacy and context precisely, along with a flexible vocabulary and notation for such descriptions and comparisons. Moreover, it links philosophical and conceptual work on privacy to social science and policy work and accommodates different conceptions of the nature and value of privacy. The paper begins with an outline of the framework. It then refines the view by describing a hypothetical application. Finally, it applies the framework to a real-world privacy issue—campaign finance disclosure laws in the U.S. and in France. The paper concludes by arguing that the framework offers important advantages for privacy scholarship and for privacy policymakers.

Buzz Scherr, Genetic Privacy and Police Practices

Comment by: Paul Frisch

PLSC 2013

Workshop draft abstract:

Genetic privacy and police practices have come to the fore in the criminal justice system. Case law and stories in the media document that police are surreptitiously harvesting the out-of-body DNA of putative suspects. Some sources even indicate that surreptitious data banking may also be in its infancy. Surreptitious harvesting of out-of-body DNA by the police is currently unregulated by the Fourth Amendment. The few courts that have addressed the issue find that the police are free to harvest DNA abandoned by a putative suspect in a public place. Little in the nascent surreptitious-harvesting case law suggests that surreptitious data banking would be regulated under current judicial conceptions of the Fourth Amendment either.

The surreptitious harvesting courts have misapplied the Katz reasonable-expectation-of-privacy test recently reaffirmed by the Supreme Court in U.S. v. Jones. They have taken a mistakenly narrow, property-based approach to their analyses. Given the potential for future abuse of the freedom to collect anyone’s out-of-body DNA without even a hunch, this article proposes that the police do not need a search warrant or probable cause to seize an abandoned item in or on which cells and DNA exist, but that they do need a search warrant supported by probable cause to enter the cell and harvest the DNA.

An interdisciplinary perspective on the physical, informational and dignitary dimensions of genetic privacy suggests that an expectation of privacy exists in the kaleidoscope of identity that is in out-of-body DNA. Using linguistic theory on the use of metaphors, the article also examines the use of DNA metaphors in popular culture as a reference point to explain a number of features of core identity in contrast to the superficiality of fingerprint metaphors.  Popular culture’s frequent uses of DNA as a reference point reverberate in a way that suggests that society does recognize as reasonable an expectation of privacy in DNA.

Marc Blitz, The Law and Political Theory of “Privacy Substitutes”

Comment by: Ian Kerr

PLSC 2013

Workshop draft abstract:

The article explores when government officials should, in some cases, be permitted to take measures that lessen individuals’ informational privacy on the condition that they in some sense compensate for it “in kind” – either by (i) recreating this privacy in a different form or (ii) providing individuals with some other kind of legal protection that assures, for example, that the disclosed information will not be used to impose other kinds of harm.

My aim in the article is to make three points.  First, I explore the ways in which the concept of a privacy substitute already plays a role in at least two areas of Fourth Amendment law:

  1. The case law on “special needs” and administrative searches, which discusses when “constitutionally adequate substitute[s]” for a warrant (to use the language of New York v. Burger (1987)) or statutory privacy protections (such as those in the DNA Act) may compensate for the absence of warrant or other privacy safeguards; and
  2. Cases holding that certain technologies that allow individuals to gather information from a private environment (such as a closed container) might be deemed “non-searches” if the technologies have built-in limitations assuring that they do not gather information beyond the presence of contraband material or other information in which there is no “reasonable expectation of privacy” under the Fourth Amendment.

In each of these cases, I argue, courts have relied on certain assumptions – some of them problematic – about when certain kinds of statutory, administrative, or technological privacy protections may be substituted for more familiar constitutional privacy protections such as warrant requirements.

Second, I argue that, while such cases have sometimes set the bar too low for government searches, “privacy substitutes” of this sort can and should play a role in Fourth Amendment jurisprudence, and also perhaps in First Amendment law on anonymous speech and other constitutional privacy protections.  In fact, I will argue, there are situations where technological developments may make such “privacy substitutions” not merely helpful in saving certain government measures from invalidation, but essential for replacing certain kinds of privacy safeguards that would otherwise fall victim to technological changes (such as advances in location tracking and video surveillance technology which undermine the features of the public environment individuals could previously rely upon to find privacy in public settings).

Third, focusing on the example of protections for anonymous speech in First Amendment law, I explore under what circumstances government should, in some cases, be permitted to replace privacy protections not with new kinds of privacy protection, but rather with other legal measures that serve the same end — for example, measures that provide the liberty, or sanctuary from retaliation, that privacy is sometimes relied upon for.

Lauren E. Willis, Why Not Privacy by Default?

Comment by: Michael Geist

PLSC 2013

Workshop draft abstract:

We live in a Track-Me world.   Firms collect reams of personal data about all of us, for marketing, pricing, and other purposes.  Most people do not like this.  Policymakers have proposed that people be given choices about whether, by whom, and for what purposes their personal information is collected and used.  Firms claim that consumers already can opt out of the Track-Me default, but that choice turns out to be illusory.  Consumers who attempt to exercise this choice find their efforts stymied by the limited range of options firms actually give them and technology that bypasses consumer attempts at self-determination.  Even if firms were to provide consumers with the option to opt out of tracking completely and to respect that choice, opting out would likely remain so cumbersome as to be impossible for the average consumer.

In response, some have suggested switching the default rule, such that firms (or some firms) would not be permitted to collect (or collect in some manners) and/or use (or use for some purposes) personal data (or some types of personal data) unless consumers opt out of a “Do-Not-Track” default.  Faced with this penalty default, firms ostensibly would be forced to clearly explain to consumers how to opt out of the default and to justify to consumers why they should opt into a Track-Me position.  Consumers could then, the reasoning goes, freely exercise informed choice in selecting whether to be tracked.

Industry vigorously opposes a Do-Not-Track default, arguing that Track-Me is the better position for most consumers and that the positive externalities created by tracking justify keeping that as the default, if not unwaivable, position.  Some privacy advocates oppose both Track-Me and Do-Not-Track defaults on the grounds that the negative externalities created by tracking justify refusing to allow any consumers to consent to tracking at all.

Here I caution against the use of a Do-Not-Track default on different grounds.  Lessons from the experience of consumer-protective defaults in other realms counsel that a Do-Not-Track default is likely to be slippery.  The very same transaction barriers and decisionmaking biases that can lead consumers to stick with defaults in some situations can be manipulated by firms to induce consumers to opt out of a Do-Not-Track default.  Rather than forcing firms to clearly inform consumers of their options and allowing consumers to exercise informed choice, a Do-Not-Track default will provide firms with opportunities to confuse many consumers into opting out.  Once a consumer opts out of a default position, courts, commentators, and the consumer herself are more likely to blame the consumer for any adverse consequences that might befall her.  The few sophisticated consumers who are able to effectively control whether they are tracked will benefit, but at the expense of the majority who will lack effective self-determination in this realm.  For political reasons, a Do-Not-Track default might be a necessary policy way station en route to a scheme of privacy-protective mandates, but it also might defuse the political will to implement such a scheme without meaningfully changing the lack of choice inherent in today’s Track-Me world.

I use “track” to mean all forms of personal data collection and use beyond those that are reasonably expected for the immediate transaction at hand.  So, for example, a consumer who provides her address to her bank expects it to be used for the purposes of mailing her information about her accounts, but does not expect it to be used to decide whether or at what price to offer her auto insurance.

Steven M. Bellovin, Renée M. Hutchins, Tony Jebara, Sebastian Zimmeck, When Enough is Enough: Location Tracking, Mosaic Theory and Machine Learning

Comment by: Orin Kerr

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2320019

Workshop draft abstract:

Since 1967, the Supreme Court has tied our right to be free of unwanted government scrutiny to the concept of reasonable expectations of privacy.[5] Reasonable expectations include, among other things, an assessment of the intrusiveness of government action. Historically, when making such assessments, the Court has considered police conduct with clear temporal, geographic, or substantive limits. However, in an era where new technologies permit the storage and compilation of vast amounts of personal data, things are becoming more complicated. A school of thought known as “Mosaic Theory” has stepped into the void, ringing the alarm that our old tools for assessing the intrusiveness of government conduct potentially undervalue our privacy rights.

Mosaic theorists advocate a cumulative approach to the evaluation of data collection. Under the theory, searches are “analyzed as a collective sequence of steps rather than as individual steps.”[6] The approach is based on the recognition that comprehensive aggregation of even seemingly innocuous data reveals greater insight than consideration of each piece of information in isolation. Over time, discrete units of surveillance data can be processed to create a mosaic of habits, relationships, and much more. Consequently, a Fourth Amendment analysis that focuses only on the government’s collection of discrete units of trivial data fails to appreciate the true harm of long-term surveillance—the composite.

In the context of location tracking, the Court has previously suggested that the Fourth Amendment may (at some theoretical threshold) be concerned with the accumulated information revealed by surveillance.[7] Similarly, in the Court’s recent decision in United States v. Jones, a majority of concurring justices indicated willingness to explore such an approach. However, in the main, the Court has rejected any notion that technological enhancement matters to the constitutional treatment of location tracking.[8] Rather, the Court has found that such surveillance in public spaces, which does not require physical trespass, is equivalent to a human tail and thus not regulated by the Fourth Amendment. In this way, the Court has avoided quantitative analysis of the amendment’s protections.

The Court’s reticence is built on the enticingly direct assertion that objectivity under the mosaic theory is impossible. This is true in large part because no rationale has yet been offered to objectively distinguish relatively short-term monitoring from its counterpart of greater duration. As Justice Scalia, writing for the majority in United States v. Jones, recently observed: “it remains unexplained why a 4-week investigation is ‘surely’ too long.”[9] This article answers that question for the first time by combining the lessons of machine learning with mosaic theory and applying the pairing to the Fourth Amendment.

Machine learning is the branch of computer science concerning systems that can draw inferences from collections of data, generally by means of mathematical algorithms. In a recent competition called “The Nokia Mobile Data Challenge,”[10] researchers evaluated machine learning’s applicability to GPS and mobile phone data. From a user’s location history alone, the researchers were able to estimate the user’s gender, marital status, occupation and age.[11]

Algorithms developed for the competition were also able to predict a user’s likely future position by observing past location history. Indeed, a user’s future location could even be inferred with a relative degree of accuracy using the location data of friends and social contacts.[12]

Machine learning of the sort on display during the Nokia Challenge seeks to harness with artificial intelligence the data deluge of today’s information society by efficiently organizing data, finding statistical regularities and other patterns in it, and making predictions therefrom. It deduces information—including information that has no obvious linkage to the input data—that may otherwise have remained private due to the natural limitations of manual and human-driven investigation. Analysts have also begun to “train” machine learning programs using one dataset to find similar characteristics in new datasets. When applied to the digital “bread crumbs” of data generated by people, machine learning algorithms can make targeted personal predictions. The greater the number of data points evaluated, the greater the accuracy of the algorithm’s results.
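To make this concrete, the following is a minimal, hypothetical sketch (in Python, and not the competition entrants’ actual code) of the kind of pipeline described above: synthetic location-history features are used to train an off-the-shelf classifier to predict a demographic label. The data layout, the scikit-learn model, and every parameter are illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_users, n_places, n_hours = 500, 20, 24

    # Each user is summarized as the fraction of observed time spent at each of
    # n_places location clusters during each hour of the day, flattened into a
    # single feature vector (20 x 24 = 480 features per user).
    X = rng.dirichlet(np.ones(n_places * n_hours), size=n_users)

    # A synthetic "demographic" label loosely tied to a subset of the features,
    # so there is a pattern for the model to recover; in a real study the label
    # would come from participant-reported attributes (gender, age, and so on).
    signal = X[:, :n_places].sum(axis=1)
    y = (signal > np.median(signal)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

In a real study the feature matrix would be derived from observed GPS or cell-tower traces and the labels from participant surveys; the sketch only illustrates that, once enough data points exist, standard tooling is all that is needed to turn location histories into personal predictions.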

As this article explains, technology giveth and technology taketh away. The objective understanding of data compilation that is revealed by machine learning provides important Fourth Amendment insights. We should begin to consider these insights more closely.

In four parts, this article advances the conclusion that the duration of investigations is relevant to their substantive Fourth Amendment treatment because duration affects the accuracy of the generated composite. Though it was previously difficult to explain why an investigation of four weeks was substantively different from an investigation of four hours, we now can. As machine learning algorithms reveal, composites (and predictions) of startling accuracy can be generated with remarkably few data points. Furthermore, in some situations accuracy can increase dramatically above certain thresholds. For example, a 2012 study found the ability to deduce ethnicity improved slowly through five weeks of phone data monitoring, jumped sharply to a new plateau at that point, and then increased sharply again after twenty-eight weeks. More remarkably, the accuracy of identification of a target’s significant other improved dramatically after five days’ worth of data inputs.[14] Experiments like these support the notion of a threshold, a point at which it makes sense to draw a line.

The results of machine learning algorithms can be combined with quantitative privacy definitions. For example, when viewed through the lens of k-anonymity, we now have an objective basis for distinguishing between law enforcement activities of differing duration. While reasonable minds may dispute the appropriate value of k or may differ regarding the most suitable minimum accuracy threshold, this article makes the case that the collection of data points allowing composites or predictions that exceed selected thresholds should be deemed unreasonable searches in the absence of a warrant.[15] Moreover, any new rules should take into account not only the data being collected but also the foreseeable improvement in the machine learning technology that will ultimately be brought to bear on it; this includes using future algorithms on older data.
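As a rough illustration of how a quantitative definition such as k-anonymity could supply an objective line, the following hypothetical sketch (again in Python, with field names and a value of k chosen purely for illustration) checks whether a set of inferred location composites leaves every individual indistinguishable within a group of at least k records.

    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        # True if every combination of quasi-identifier values appears in at
        # least k records, i.e., no one stands out in a group smaller than k.
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    # Toy "composites" a tracking algorithm might have inferred for four people.
    records = [
        {"home_area": "A", "work_area": "X", "commute_hour": 8},
        {"home_area": "A", "work_area": "X", "commute_hour": 8},
        {"home_area": "A", "work_area": "X", "commute_hour": 8},
        {"home_area": "B", "work_area": "Y", "commute_hour": 9},  # unique, so k=2 fails
    ]

    print(is_k_anonymous(records, ("home_area", "work_area", "commute_hour"), k=2))  # False

On this way of operationalizing the idea, one could imagine tying the warrant requirement to the point at which such a check fails for the composites a given duration of tracking supports, while leaving the appropriate value of k, as the abstract notes, open to dispute.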

In 2001, the Supreme Court asked “what limits there are upon the power of technology to shrink the realm of guaranteed privacy.”[16] In this piece, we explore what lessons there are in the power of technology to protect the realm of guaranteed privacy. The time has come for the Fourth Amendment to embrace what technology already tells us—a four-week investigation is surely too long because the amount of data collected during such an investigation creates a highly intrusive view of a person that, without a warrant, fails to comport with our constitutional limits on government.

1  Professor, Columbia University, Department of Computer Science.

2  Associate Professor, University of Maryland Carey School of Law.

3  Associate Professor, Columbia University, Department of Computer Science.

4 Ph.D. candidate, Columbia University, Department of Computer Science.

5  Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring).

6  Orin Kerr, The Mosaic Theory of the Fourth Amendment, 111 Mich. L. Rev. 311, 312 (2012).

7 United States v. Knotts, 460 U.S. 276, 284 (1983).

8  Compare Knotts, 460 U.S. at 276 (rejecting the contention that an electronic beeper should be treated differently than a human tail) and Smith v. Maryland, 442 U.S. 735, 744 (1979) (approving the warrantless use of a pen register in part because the justices were “not inclined to hold that a different constitutional result is required because the telephone company has decided to automate.”) with Kyllo v. United States, 533 U.S. 27, 33 (2001) (recognizing that advances in technology affect the degree of privacy secured by the Fourth Amendment).

9  United States v. Jones, 132 S.Ct. 945 (2012); see also Kerr, 111 Mich. L. Rev. at 329-330.

10  See http://research.nokia.com/page/12340.

11  Sanja Brdar, Dubravko Culibrk, and Vladimir Crnojevic, Demographic Attributes Prediction on the Real-World Mobile Data, Nokia Mobile Data Challenge Workshop 2012.

12  Manlio de Domenico, Antonio Lima, and Mirco Musolesi, Interdependence and Predictability of Human Mobility and Social Interactions, Nokia Mobile Data Challenge Workshop 2012.

14  See, e.g., Yaniv Altshuler, Nadav Aharony, Michael Fire, Yuval Elovici, Alex Pentland, Incremental Learning with Accuracy Prediction of Social and Individual Properties from Mobile-Phone Data, WS3P, IEEE Social Computing (2012), especially Figures 9 and 10.

15 Admittedly, there are differing views on sources of authority beyond the Constitution that might justify location tracking. See, e.g., Stephanie K. Pell and Christopher Soghoian, Can You See Me Now? Toward Reasonable Standards for Law Enforcement Access to Location Data That Congress Could Enact, 27 Berkeley Tech. L.J. 117 (2012).

16  Kyllo, 533 U.S. at 34.

Allyson Haynes Stuart, Search Results – Buried But Not Forgotten

Comment by: Paul Bernal

PLSC 2013

Workshop draft abstract:

The “right to be forgotten” has gotten a lot of attention lately, primarily because of its potential to chill online speech.  At the same time, there is a rise in US cases seeking deletion of online information.  The problem that gives rise to the EU’s right to be forgotten is only increasing – the conflict between the self-image people want to present and the one that is presented on the internet.

The primary problem in imagining the application of a right to be forgotten in the United States is the vastly different legal background in the US and the EU.  In the US, information posted online is, for the most part, considered “speech” – and the First Amendment strongly protects such speech from any limitation, be it a restriction on what may be posted or a requirement that existing information be taken down.  The internet is likened to one huge street corner, and anyone with access is welcome to post at will on his or her virtual soap box.  The imprimatur of speech gives online content the golden halo of First Amendment protection, which has grown only more robust in recent years.  In contrast, the EU interprets the online posting of information as the processing of “data” that is owned by the individual data subject.  Under the Data Protection Directive, such processing is subject to a host of restrictions.  So under a system where an entity needs a purpose to gather personal information and may use it only for the duration of that purpose, it is not far-fetched to imagine a requirement that certain information be deleted under circumstances including when the data is no longer necessary for the original purpose.

So to determine whether there is hope for any such right in the US, we need to think in US terms – freedom of speech and its (few) limitations, rather than data rights and the processing of subjects’ personal information.  Nonetheless, there are some ways in which our jurisprudence may be interpreted as supporting some rights to restrict the posting and continued publication of certain online content.

This article starts from the observation that the average person would not object to information remaining on particular websites were it not for the ease with which that information can be discovered through searching.  The true problem most people have with sites’ refusal to “take down” certain information, in other words, is that it shows up in response to searches – primarily Google searches.  I therefore approach a right to deletion online by concentrating on the role of search engines in keeping alive information others would prefer to become “practically obscure.”

This article proposes a compromise whereby a notice-and-takedown system similar to that for copyright violations would allow individuals to request that search engines cease to prominently place certain information in their “results” on the basis of one or more of the following reasons:  (1) the information is no longer “newsworthy” based on its age, taking into account in particular events or information concerning a youth that are less relevant years later; (2) the information borders on defamation or false light publication based on subsequent events, such as the acquittal of a person charged with a crime, or a finding of no liability of an entity sued for tortious misconduct; (3) the information is unduly harmful, such as that resulting in bullying or stalking; (4) the information is untrue or defamatory; or (5) any other reason for which the continued high placement of the information subjects a person or entity to unfair prejudice.

The fact is that Google is already responding to requests to take down information outside the intellectual property realm.  But it is responding to those requests in an opaque manner based on its own internal views of what requests are proper or not.  There should be guidelines for those decisions so that they are not based on bias, identity of the requester, or happenstance.

The benefits of this proposal include the fact that, because it applies suggested guidelines, it avoids the constitutional problem of requiring deletion.  The proposal is less logistically difficult to implement than requiring removal of information from all websites, because it only guides the action that Google already takes in response to take-down requests.  Finally, while the proposal falls well short of requiring erasure like the EU’s proposal of a right to be forgotten, it addresses the primary concern of most people who seek such deletion – decreasing the prominence of such information in response to a search request.

Amy Gajda, The First Amendment Bubble: Legal Limits on News and Information in an Age of Over-Exposure

Comment by: Samantha Barbas

PLSC 2013

Workshop draft abstract:

In the fall of 2012, magazines and websites published clandestine nude photographs of Kate Middleton, Duchess of Cambridge; passages from deceased ambassador Christopher Stevens’ personal diary, pilfered by CNN reporters at the scene of the ransacked consulate in Libya; and hidden camera video of wrestler Hulk Hogan engaging in graphic sexual activity with a friend’s wife.

We live today in an age of over-exposure in media, bombarded with images and information once thought inappropriate for public consumption, much of it self-published.  The feed of internet postings and other publications has combined with significant changes in media practices to fuel a sense that, when it comes to public discourse, anything goes, and that the media are only too happy to facilitate it.

These changes are undermining the constitutional sensibility that has protected press rights and access to information for the better part of the last century.  That sensibility recognized that privacy interests came second to the public interest in newsworthy truthful information, and it trusted journalists to regulate themselves in deciding what qualified as news.  Today, in an environment in which journalists and quasi-journalists seem ever less inclined to restrain themselves in indulging the public appetite for information, however scandalous or titillating, that bargain seems increasingly naïve.  And courts are beginning to show new muscle in protecting persons from media invasions by imposing their own sense of the proper boundaries of news and other truthful public disclosures.  The First Amendment bubble, enlarged by an expanding universe of claims to protection by traditional media, internet ventures, and “citizen journalists,” could burst.

Alessandro Acquisti, Laura Brandimarte, and Jeff Hancock, Are There Evolutionary Roots To Privacy Concerns?

Comment by: Dawn Schrader

PLSC 2013

Workshop draft abstract:

We present a series of experiments aimed at investigating potential evolutionary roots of privacy concerns.

Numerous factors determine our different reactions to offline and online threats. An act that appears inappropriate in one context (watching somebody undressing in their bedroom) is natural in another (on the beach); the physical threat of a stranger following us in the street is more ominous than the worst consequences of an advertiser knowing what we do online; common sense and social conventions tell us that genuine Rolexes are not sold at street corners – but fake Bank of America websites are found at what seem like the right URLs. There is, however, one crucial parallel that connects the scenarios we just described: our responses to threats in the physical world are sensitive to stimuli which we have evolved to recognize as signals of danger. Those signals are absent, subdued, or manipulated, in cyberspace. The “evolutionary” conjecture we posit and experimentally investigate is that privacy (as well as security) decision making in cyberspace may be inherently more difficult than privacy and security decision making in the physical world, because – among other reasons – online we lack, or are less exposed to, the stimuli we have evolved to employ offline as means of detection of potential threats.

Through a series of lab experiments, we are investigating this conjecture indirectly, by measuring the impact that the presence, absence, or changes to an array of stimuli in the physical world (which are mostly unconsciously processed) will have on security and privacy behavior in cyberspace.

Our approach focuses on the mostly unconsciously processed stimuli that influence security and privacy behavior in the offline world, and is premised on an evolutionary conjecture: human beings have evolved sensorial systems selected to detect and recognize threats in their environment via physical, “external” stimuli. These stimuli, or cues, often carry information about the presence of others in one’s space or territory. The evolutionary advantages of being able to process and react to such stimuli are clear: by using these signals to assess threats in their physical proximity, humans reduce the chance of being preyed upon (Darwin, 1859; Schaller, Faulkner, Park, Neuberg & Kenrick, 2005). Under this conjecture, the modern, pre-information-age notion of privacy may be an evolutionary by-product of the search for security. Such an evolutionary explanation for privacy concerns may help explain why – despite the wide and diverse array of privacy attitudes and behaviors across time and geography – evidence of a desire for privacy, broadly construed, can be found across most cultures. Furthermore, since those signals are absent, subdued, or manipulated in cyberspace, generating an evolutionary “deficit,” such an evolutionary story may explain why privacy concerns that would normally be activated in the offline world are suppressed online, and why defense behaviors are hampered.

The research we are conducting, therefore, combines lessons from disciplines that have been recently applied to privacy and security (such as usability, economics, or behavioral decision research) with lessons and methodologies from evolutionary psychology (Buss, 1991, 1995). While this gendered, evolutionary perspective is not without criticism, it can explain several patterns in online dating behavior. Women, for example, are more likely to include dated and otherwise deceptive photos in their profile than men (Hancock & Toma, 2009). Physical attractiveness also plays a role, with attractive daters lying less in their profiles and judging those who do lie more harshly than unattractive daters (Toma & Hancock, 2010). Indeed, extant cyber-research has been criticized for ignoring the evolutionary pressures that may shape online behaviors (see Kock, 2004), such as humans’ ability to cognitively adapt to new media, and their evolutionary preferences for certain media characteristics (e.g., synchronicity, collocation).

While we cannot directly test the evolutionary conjecture that the absence of stimuli, which humans have evolved to detect for assessing threats (including cues to the presence of other humans), contributes to our propensity to fall for cyberattacks or online privacy violations, we can test, through a series of human subjects experiments we have started piloting, how the presence, absence, or modifications in an array of stimuli in the physical world affect security and privacy behavior in cyberspace. The term “stimuli,” in the parlance of this proposal, is akin to the term “cues” as used in psychology and cognitive science. Our experiments focus on three types of such stimuli:

S1)    sensorial stimuli: auditory, visual, olfactory cues of the physical proximity of other human beings;
S2)    environmental stimuli: cues that signal to an individual certain characteristics of the physical environment in which the individual is located, such as crowdedness or familiarity;
S3)    observability stimuli: cues that signal whether the individual is possibly being surveilled.

The three categories are not meant to be mutually exclusive (for instance, it is through our senses that we receive cues about the environment). Our experiments capture how manipulations of the stimuli in the subject’s physical environment influence her privacy behavior in cyberspace. Privacy behavior is operationalized in terms of individuals’ propensity to disclose personal or sensitive information, as in previous experiments by the authors.

Kenneth Bamberger and Deirdre Mulligan, Privacy in Europe: Initial Data on Governance Choices and Corporate Practices

Comment by: Dennis Hirsch

PLSC 2013

Workshop draft abstract: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2328877

Privacy governance is at a crossroads.  In light of the digital explosion, policymakers in North America and Europe are revisiting regulation of the corporate treatment of information privacy.  The recent celebration of the thirtieth anniversary of the Organization for Economic Cooperation and Development’s (“OECD”) Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,[1] the first international statement of fair information practice principles, sparked an international review of the guidelines to identify areas for revision.  Work by national data privacy regulators reviewing the E.U. Data Protection Directive has in turn suggested alternative regulatory models oriented around outcomes.[2]  The European Commission is actively debating the terms of a new Privacy Regulation.[3]  And Congress, the FTC, and the current U.S. presidential administration have signaled a commitment to deep reexamination of the current regulatory structure, and a desire for new models.[4]

These efforts, however, have lacked critical information necessary for reform.  Scholarship and advocacy around privacy regulation have focused almost entirely on law “on the books”—legal texts enacted by legislatures or promulgated by agencies.  By contrast, the debate has strangely ignored privacy “on the ground” – the ways in which corporations in different countries have operationalized privacy protection in light of divergent formal laws; interpretive, organizational and enforcement decisions made by local administrative agencies; and other jurisdiction-specific social, cultural and legal forces.

Since 1994, when such a study examined the U.S. privacy landscape,[5] no sustained inquiry has been conducted into how corporations actually manage privacy in the shadow of formal legal mandates.  No one, moreover, has ever engaged in such a comparative inquiry across jurisdictions.  Indeed, despite wide international variation in approach, even the last detailed comparative account of enforcement practices occurred over two decades ago.[6]  Thus policy reform efforts progress largely without a real understanding of the ways in which previous regulatory attempts have actually promoted, or thwarted, privacy’s protection.

This article is the third documenting a project intended to fill this gap – and at a critical juncture.  The project uses qualitative empirical inquiry—including interviews with and surveys of corporate privacy officers, regulators, and other actors within the privacy field—to identify the ways in which privacy protection is implemented on the ground, and the combination of social, market, and regulatory forces that drive these choices.  And it offers a comparative analysis of the effects of different regulatory approaches adopted by a diversity of OECD nations, taking advantage of the living laboratory created by variations in national implementation of data protection, an environment that can support comparative, in-the-wild assessments of their ongoing efficacy and appropriateness.

While the first two articles in this series discussed research documenting the implementation of privacy in the United States,[7] this article presents the first analysis of data of its kind from Europe, reflecting research and interviews in three EU jurisdictions: Germany, Spain, and France.

The article reflects only the first take at this recently-gathered data; the analysis is not comprehensive, and the lessons drawn at this stage are necessarily tentative.  A complete consideration of the research on the privacy experience in five countries (the US, Germany, France, Spain, and the UK) – one which more generally draws lessons for broader research on paradigms for thinking about privacy, the effectiveness of corporate practices informed by those paradigms, and organizational compliance with different forms of regulation and other external norms more generally – will appear in an upcoming book-length treatment.[8]

Yet this article offers as-yet unavailable data about the European privacy landscape at a critical juncture – the moment at which policymakers are engaged in important decisions about which regulatory structures to expand to all EU member states and which to leave behind; about how individual states will structure the administrative agencies governing data privacy moving forward; and about the strategies those agencies will adopt regarding legal enforcement, the development of expertise within both the government and firms, and the ways that other participants within the privacy “field”[9]—the constellation of organizational actors participating in the construction of legal meaning in a particular domain—will (or will not) best be enlisted to shape corporate decisionmaking and ultimately privacy outcomes.

Setting the context for this analysis, Part I of this Article describes the dominant narratives regarding the regulation of privacy in the United States and the European Union – accounts that have occupied privacy scholarship and advocacy for over a decade. Part II summarizes our project to develop more granular accounts of the privacy landscape, and the resulting scholarship’s analyses of privacy “on the ground” in the U.S.  Informed by these analyses, Part III presents the results of our research regarding corporate perception and implementation of privacy requirements in three European jurisdictions, Germany, Spain and France, and places them within the theoretical framework regarding emerging best practices in the U.S.  Not surprisingly for those familiar with privacy protection in Europe, these results reveal widely varying privacy landscapes, all within the formal governance of a single legal framework: the 1995 EU Privacy Directive.  More striking, however, are the granular differences between the European jurisdictions, and the similarities between German and U.S. firms in the language in which privacy is discussed, in the particular mandates and institutions shaping privacy’s governance, and in the architecture for privacy protection and decisionmaking.  This Part then seeks to understand the construction of the privacy “field” that shapes these differing country landscapes.  Such inquiry includes the details of national implementation of the EU directive – including the specificity and type of requirements placed on regulated parties; the content of regulation, with particular attention to the comparative focus on process-based as opposed to substantive mandates, and the use of ex ante guidance as opposed to prosecution and enforcement – as well as the structure and approach of the relevant data protection agency, including the size and organization of the staff; the extent to which it relies on technical and legal “experts” inside the agency rather than inside the companies it regulates; the use of enforcement and inspections; and the manner in which regulators and firms interact more generally.  Yet it also includes an understanding of factors beyond privacy regulation itself, including other legal mandates, elements characteristic of national corporate structure, and societal factors, such as the roles of the media and other citizen, industry, labor, or professional organizations that determine the “social license” that governs a corporation’s freedom to act.

Finally, the Article’s Part IV outlines two elements of a new account of privacy’s development, informed by comparative analysis.  First, based on the data from four jurisdictions, it engages in a preliminary analysis regarding which elements of these privacy fields our interviews and other data suggest have fostered, catalyzed and permitted the most adaptive responses in the face of novel challenges to privacy.  Second, it suggests something important about the role of professional networks in the diffusion of practices across jurisdictional lines in the face of important social and technological change.  The adaptability of distinct regulatory approaches and institutions in the face of novel challenges to privacy has never been more important.  Our comparative analysis provides novel insight into the ways that different regulatory choices have interacted with other aspects of the privacy field to shape corporate behavior, offering important insights for all participants in policy debates about the governance of privacy.


[1] See The 30th Anniversary of the OECD Privacy Guidelines, OECD, www.oecd.org/sti/privacyanniversary (last visited Jan. 22, 2013).

[2]   See, e.g., Neil Robinson et al., RAND Eur., Review of the European Data Protection Directive (2009).

[3]   See, e.g., Konrad Lischka & Christian Stöcker, Data Protection: All You Need to Know about the EU Privacy Debate, Spiegel Online (Jan. 18, 2013, 10:15 AM), http://www.spiegel.de/international/europe/the-european-union-closes-in-on-data-privacy-legislation-a-877973.html.

[4]   See, e.g., Adam Popescu, Congress Sets Sights On Fixing Privacy Rights, readwrite (Jan. 18, 2013), http://readwrite.com/2013/01/18/new-congress-privacy-agenda-unvelied; F.T.C. and White House Push for Online Privacy Laws, N.Y. Times, May 10, 2012, at B8, available at http://www.nytimes.com/2012/05/10/business/ftc-and-white-house-push-for-online-privacy-laws.html?_r=0; Fed. Trade Commission, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers (Mar. 2012), available at http://www.ftc.gov/os/2012/03/120326privacyreport.pdf.

[5] See H. Jeff Smith, Managing Privacy: Information Technology and Corporate America (1994).

[6]   See David H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada, & the United States (1989).

[7] See Kenneth A. Bamberger & Deirdre K. Mulligan, New Governance, Chief Privacy Officers, and the Corporate Management of Information Privacy in the United States: An Initial Inquiry, 33 Law & Pol’y 477 (2011); Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Books and on the Ground, 63 Stan. L. Rev. 247 (2011).

[8] Kenneth A. Bamberger & Deirdre K. Mulligan, Catalyzing Privacy: Lessons From Regulatory Choices and Corporate Decisions on Both Sides of the Atlantic (MIT Press: forthcoming 2014).

[9]   See Paul J. DiMaggio & Walter W. Powell, The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields, 48 Am. Soc. Rev. 147, 148 (1983) (defining an organizational field as “those organizations that, in the aggregate, constitute a recognized area of institutional life: key suppliers, resource and product consumers, regulatory agencies, and other organizations that produce similar services or products.”); Lauren B. Edelman, Overlapping Fields and Constructed Legalities: The Endogeneity of Law, in Private Equity, Corporate Governance and the Dynamics of Capital Market Regulation 55, 58 (Justin O’Brien ed., 2007) (defining a legal field as “the environment within which legal institutions and legal actors interact and in which conceptions of legality and compliance evolve”).

Joel Reidenberg, Privacy in Public

Comment by: Franziska Boehm

PLSC 2013

Workshop draft abstract:

The existence and contours of privacy in public are in a state of both constitutional and societal confusion.  As the concurrences in U.S. v. Jones suggested, technological capabilities and deployments undermine the meaning and value of the Fourth Amendment’s third-party and ‘reasonable expectation of privacy’ doctrines.  The paper argues that the conceptual problem derives from the evolution of four stages of development in the public nature of personal information.  In the first stage, the obscurity of information in public provided protection for privacy and an expectation of privacy.  In the second stage, accessibility begins to erode the protection afforded by obscurity.  The third stage, complete transparency of information in public, erases any protection through obscurity and undercuts any privacy expectations.  Finally, the fourth stage, publicity of information, or the affirmative disclosure and dissemination of information, destroys traditional notions of protection and expectations.   At the same time, publicity without privacy protection undermines constitutional values of public safety and fair governance.   The paper argues that activity in public needs to have privacy protection framed in terms of ‘public regarding’ and ‘non-public regarding’ acts.