Allyson Haynes Stuart, Search Results – Buried But Not Forgotten

Comment by: Paul Bernal

PLSC 2013

Workshop draft abstract:

The “right to be forgotten” has received a great deal of attention lately, primarily because of its potential to chill online speech.  At the same time, US cases seeking deletion of online information are on the rise.  And the problem that gives rise to the EU’s right to be forgotten is only growing: the conflict between the self-image people want to present and the one that is presented on the internet.

The primary difficulty in imagining the application of a right to be forgotten in the United States is the vastly different legal backgrounds of the US and the EU.  In the US, information posted online is, for the most part, considered “speech” – and the First Amendment strongly protects such speech from any limitation, be it a restriction on what may be posted or a requirement that existing information be taken down.  The internet is likened to one huge street corner, and anyone with access is welcome to post at will on his or her virtual soapbox.  The imprimatur of speech gives online content the golden halo of First Amendment protection, a halo that has only grown more robust in recent years.  In contrast, the EU interprets the online posting of information as the processing of “data” that is owned by the individual data subject.  Under the Data Protection Directive, such processing is subject to a host of restrictions.  So under a system in which an entity needs a purpose to gather personal information and may use it only for the duration of that purpose, it is not far-fetched to imagine a requirement that certain information be deleted in circumstances including when the data is no longer necessary for the original purpose.

To determine whether there is hope for any such right in the US, then, we need to think in US terms – freedom of speech and its (few) limitations, rather than data rights and the processing of subjects’ personal information.  Nonetheless, there are ways in which our jurisprudence may be interpreted as supporting rights to restrict the posting and continued publication of certain online content.

This article addresses the problem by starting from the observation that the average person would find little fault with information remaining on particular websites were it not for the ease with which that information is discoverable via searching.  The true problem most people have with sites’ refusal to “take down” certain information, in other words, is that it shows up in response to searches – primarily Google searches.  I therefore approach a right to deletion online by concentrating on the role of search engines in keeping alive information that others would prefer become “practically obscure.”

This article proposes a compromise whereby a notice-and-takedown system similar to that for copyright violations would allow individuals to request that search engines cease to prominently place certain information in their “results” on the basis of one or more of the following grounds:  (1) the information is no longer “newsworthy” based on its age, taking into account in particular events or information concerning a youth that are less relevant years later; (2) the information borders on defamation or false light publication based on subsequent events, such as the acquittal of a person charged with a crime, or a finding of no liability of an entity sued for tortious misconduct; (3) the information is unduly harmful, such as that resulting in bullying or stalking; (4) the information is untrue or defamatory; or (5) any other reason for which the continued high placement of the information subjects a person or entity to unfair prejudice.
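To make the structure of the proposal concrete, here is a minimal sketch, in Python, of how such a de-listing request might be encoded. Every name in it (Ground, DelistingRequest, eligible_for_review) is a hypothetical illustration, not something the article specifies; the substantive weighing of each ground would remain a human judgment.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Ground(Enum):
    """Hypothetical encoding of the five proposed grounds for de-listing."""
    NO_LONGER_NEWSWORTHY = auto()   # (1) stale, esp. events concerning a youth
    SUPERSEDED_BY_EVENTS = auto()   # (2) borders on defamation/false light after acquittal, etc.
    UNDULY_HARMFUL = auto()         # (3) fuels bullying or stalking
    UNTRUE_OR_DEFAMATORY = auto()   # (4) false or defamatory content
    UNFAIR_PREJUDICE = auto()       # (5) catch-all: unfairly prejudicial prominence

@dataclass
class DelistingRequest:
    requester: str
    url: str
    grounds: set       # Ground members asserted by the requester
    explanation: str   # supporting narrative the search engine would review

def eligible_for_review(request: DelistingRequest) -> bool:
    """A request enters guideline review only if it asserts at least one
    enumerated ground and explains it; this models the proposal's structure,
    not its substantive evaluation."""
    return bool(request.grounds) and bool(request.explanation.strip())

# Example: a request citing ground (1) against a hypothetical URL.
req = DelistingRequest("J. Doe", "http://example.com/old-story",
                       {Ground.NO_LONGER_NEWSWORTHY}, "Decade-old story about a minor.")
assert eligible_for_review(req)
```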

The fact is that Google is already responding to requests to take down information outside the intellectual property realm.  But it is responding to those requests in an opaque manner, based on its own internal views of which requests are proper.  There should be guidelines for those decisions so that they are not based on bias, the identity of the requester, or happenstance.

The benefits of this proposal include that, because it applies suggested guidelines rather than mandates, it avoids the constitutional problem of requiring deletion.  The proposal is also less logistically difficult to implement than requiring removal of information from all websites, because it only guides the action that Google already takes in response to takedown requests.  Finally, while the proposal falls well short of requiring erasure like the EU’s proposed right to be forgotten, it addresses the primary concern of most people who seek such deletion – decreasing the prominence of such information in response to a search request.

Amy Gajda, The First Amendment Bubble: Legal Limits on News and Information in an Age of Over-Exposure

Comment by: Samantha Barbas

PLSC 2013

Workshop draft abstract:

In the fall of 2012, magazines and websites published clandestine nude photographs of Kate Middleton, Duchess of Cambridge; passages from deceased ambassador Christopher Stevens’ personal diary, pilfered by CNN reporters at the scene of the ransacked consulate in Libya; and hidden-camera video of wrestler Hulk Hogan engaging in graphic sexual activity with a friend’s wife.

We live today in an age of over-exposure in media, bombarded with images and information once thought inappropriate for public consumption, much of it self-published.  The feed of internet postings and other publications has combined with significant changes in media practices to fuel a sense that, when it comes to public discourse, anything goes – and that the media are only too happy to facilitate.

These changes are undermining the constitutional sensibility that has protected press rights and access to information for the better part of the last century.  That sensibility recognized that privacy interests came second to the public interest in newsworthy truthful information, and it trusted journalists to regulate themselves in deciding what qualified as news.  Today, in an environment in which journalists and quasi-journalists seem ever less inclined to restrain themselves in indulging the public appetite for information, however scandalous or titillating, that bargain seems increasingly naïve.  And courts are beginning to show new muscle in protecting persons from media invasions by imposing their own sense of the proper boundaries of news and other truthful public disclosures.  The First Amendment bubble, enlarged by an expanding universe of claims to protection by traditional media, internet ventures, and “citizen journalists,” could burst.

Alessandro Acquisti, Laura Brandimarte, and Jeff Hancock, Are There Evolutionary Roots To Privacy Concerns?

Comment by: Dawn Schrader

PLSC 2013

Workshop draft abstract:

We present a series of experiments aimed at investigating potential evolutionary roots of privacy concerns.

Numerous factors determine our different reactions to offline and online threats. An act that appears inappropriate in one context (watching somebody undressing in their bedroom) is natural in another (on the beach); the physical threat of a stranger following us in the street is more ominous than the worst consequences of an advertiser knowing what we do online; common sense and social conventions tell us that genuine Rolexes are not sold at street corners – but fake Bank of America websites are found at what seem like the right URLs. There is, however, one crucial parallel that connects the scenarios we just described: our responses to threats in the physical world are sensitive to stimuli which we have evolved to recognize as signals of danger. Those signals are absent, subdued, or manipulated in cyberspace. The “evolutionary” conjecture we posit and experimentally investigate is that privacy (as well as security) decision making in cyberspace may be inherently more difficult than privacy and security decision making in the physical world, because – among other reasons – online we lack, or are less exposed to, the stimuli we have evolved to employ offline as means of detecting potential threats.

Through a series of lab experiments, we are investigating this conjecture indirectly, by measuring the impact that the presence or absence of, or changes to, an array of stimuli in the physical world (stimuli which are mostly unconsciously processed) will have on security and privacy behavior in cyberspace.

Our approach focuses on the mostly unconsciously processed stimuli that influence security and privacy behavior in the offline world, and is predicated on an evolutionary conjecture: human beings have evolved sensorial systems selected to detect and recognize threats in their environment via physical, “external” stimuli. These stimuli, or cues, often carry information about the presence of others in one’s space or territory. The evolutionary advantages of being able to process and react to such stimuli are clear: by using these signals to assess threats in their physical proximity, humans reduce the chance of being preyed upon (Darwin, 1859; Schaller, Faulkner, Park, Neuberg & Kenrick, 2005). Under this conjecture, the modern, pre-information-age notion of privacy may be an evolutionary by-product of the search for security. Such an evolutionary explanation of privacy concerns may help account for why – despite the wide and diverse array of privacy attitudes and behaviors across time and geography – evidence of a desire for privacy, broadly construed, can be found across most cultures. Furthermore, since those signals are absent, subdued, or manipulated in cyberspace, generating an evolutionary “deficit,” such an account may explain why privacy concerns that would normally be activated in the offline world are suppressed online, and why defense behaviors are hampered.

The research we are conducting, therefore, combines lessons from disciplines that have recently been applied to privacy and security (such as usability, economics, and behavioral decision research) with lessons and methodologies from evolutionary psychology (Buss, 1991, 1995). Evolutionary psychology has already been applied to online behavior: while its gendered, evolutionary perspective is not without criticism, it can explain several patterns in online dating behavior. Women, for example, are more likely than men to include dated and otherwise deceptive photos in their profiles (Hancock & Toma, 2009). Physical attractiveness also plays a role, with attractive daters lying less in their profiles and judging those who do lie more harshly than unattractive daters (Toma & Hancock, 2010). Indeed, extant cyber-research has been criticized for ignoring the evolutionary pressures that may shape online behaviors (see Kock, 2004), such as humans’ ability to cognitively adapt to new media and their evolutionary preferences for certain media characteristics (e.g., synchronicity, collocation).

While we cannot directly test the evolutionary conjecture that the absence of stimuli which humans have evolved to detect for assessing threats (including cues to the presence of other humans) contributes to our propensity to fall for cyberattacks or online privacy violations, we can test, through a series of human subjects experiments we have started piloting, how the presence, absence, or modification of an array of stimuli in the physical world affects security and privacy behavior in cyberspace. The term “stimuli,” in the parlance of this proposal, is akin to the term “cues” as used in psychology and cognitive science. Our experiments focus on three types of such stimuli:

S1) sensorial stimuli: auditory, visual, and olfactory cues of the physical proximity of other human beings;
S2) environmental stimuli: cues that signal to an individual certain characteristics of the physical environment in which the individual is located, such as crowdedness or familiarity;
S3) observability stimuli: cues that signal whether the individual is possibly being surveilled.

The three categories are not meant to be mutually exclusive (for instance, it is through our senses that we receive cues about the environment). Our experiments capture how manipulations of the stimuli in the subject’s physical environment influence her privacy behavior in cyberspace. Privacy behavior is operationalized in terms of individuals’ propensity to disclose personal or sensitive information, as in previous experiments by the authors.
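As a concrete illustration of this design, the following Python sketch encodes the three stimulus categories and randomly assigns participants to presence/absence conditions. The names and the three-by-two layout are our illustrative assumptions, not the authors’ actual protocol.

```python
import random
from dataclasses import dataclass
from enum import Enum

class StimulusType(Enum):
    SENSORIAL = "S1"       # cues of others' physical proximity (auditory, visual, olfactory)
    ENVIRONMENTAL = "S2"   # characteristics of the setting (crowdedness, familiarity)
    OBSERVABILITY = "S3"   # cues of possibly being surveilled

@dataclass
class Condition:
    stimulus: StimulusType
    present: bool  # stimulus present vs. absent/subdued in the lab environment

def assign_conditions(participant_ids, seed=0):
    """Randomly assign each participant to one stimulus-by-presence cell."""
    rng = random.Random(seed)
    cells = [Condition(s, p) for s in StimulusType for p in (True, False)]
    return {pid: rng.choice(cells) for pid in participant_ids}

# Disclosure propensity (e.g., the share of sensitive questions a participant
# answers) would then be compared across cells to estimate each stimulus's effect.
assignments = assign_conditions(range(12))
```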

Kenneth Bamberger and Deirdre Mulligan, Privacy in Europe: Initial Data on Governance Choices and Corporate Practices

Comment by: Dennis Hirsch

PLSC 2013

Workshop draft abstract: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2328877

Privacy governance is at a crossroads.  In light of the digital explosion, policymakers in North America and Europe are revisiting regulation of the corporate treatment of information privacy.  The recent celebration of the thirtieth anniversary of the Organization for Economic Cooperation and Development’s (“OECD”) Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,[1] the first international statement of fair information practice principles, sparked an international review of the guidelines to identify areas for revision.  Work by national data privacy regulators reviewing the E.U. Data Protection Directive has in turn suggested alternative regulatory models oriented around outcomes.[2]  The European Commission is actively debating the terms of a new Privacy Regulation.[3]  And Congress, the FTC, and the current U.S. presidential administration have signaled a commitment to deep reexamination of the current regulatory structure, and a desire for new models.[4]

These efforts, however, have lacked critical information necessary for reform.  Scholarship and advocacy around privacy regulation have focused almost entirely on law “on the books” – legal texts enacted by legislatures or promulgated by agencies.  By contrast, the debate has strangely ignored privacy “on the ground” – the ways in which corporations in different countries have operationalized privacy protection in the light of divergent formal laws; interpretive, organizational, and enforcement decisions made by local administrative agencies; and other jurisdiction-specific social, cultural, and legal forces.

Since 1994, when such a study examined the U.S. privacy landscape,[5] no sustained inquiry has been conducted into how corporations actually manage privacy in the shadow of formal legal mandates.  No one, moreover, has ever engaged in such a comparative inquiry across jurisdictions.  Indeed, despite wide international variation in approach, even the last detailed comparative account of enforcement practices occurred over two decades ago.[6]  Thus policy reform efforts progress largely without a real understanding of the ways in which previous regulatory attempts have actually promoted, or thwarted, privacy’s protection.

This article is the third documenting a project intended to fill this gap – and at a critical juncture.  The project uses qualitative empirical inquiry—including interviews with and surveys of corporate privacy officers, regulators, and other actors within the privacy field—to identify the ways in which privacy protection is implemented on the ground, and the combination of social, market, and regulatory forces that drive these choices.  And it offers a comparative analysis of the effects of different regulatory approaches adopted by a diversity of OECD nations, taking advantage of the living laboratory created by variations in national implementation of data protection, an environment that can support comparative, in-the-wild assessments of their ongoing efficacy and appropriateness.

While the first two articles in this series discussed research documenting the implementation of privacy in the United States,[7] this article presents the first analysis of data of its kind from Europe, reflecting research and interviews in three EU jurisdictions: Germany, Spain, and France.

The article reflects only a first take on this recently gathered data; the analysis is not comprehensive, and the lessons drawn at this stage are necessarily tentative.  A complete consideration of the research on the privacy experience in five countries (the US, Germany, France, Spain, and the UK) – one which more generally draws lessons for broader research on paradigms for thinking about privacy, the effectiveness of corporate practices informed by those paradigms, and organizational compliance with different forms of regulation and other external norms – will appear in an upcoming book-length treatment.[8]

Yet this article offers as-yet unavailable data about the European privacy landscape at a critical juncture – the moment at which policymakers are engaged in important decisions about which regulatory structures to expand to all EU member states and which to leave behind; about how individual states will structure the administrative agencies governing data privacy moving forward; and about the strategies those agencies will adopt regarding legal enforcement, the development of expertise within both the government and firms, and the ways that other participants within the privacy “field”[9] – the constellation of organizational actors participating in the construction of legal meaning in a particular domain – will (or will not) best be enlisted to shape corporate decisionmaking and, ultimately, privacy outcomes.

Setting the context for this analysis, Part I of this Article describes the dominant narratives regarding the regulation of privacy in the United States and the European Union – accounts that have occupied privacy scholarship and advocacy for over a decade.  Part II summarizes our project to develop more granular accounts of the privacy landscape, and the resulting scholarship’s analyses of privacy “on the ground” in the U.S.  Informed by these analyses, Part III presents the results of our research regarding corporate perception and implementation of privacy requirements in three European jurisdictions – Germany, Spain, and France – and places them within the theoretical framework regarding emerging best practices in the U.S.  Not surprisingly for those familiar with privacy protection in Europe, these results reveal widely varying privacy landscapes, all within the formal governance of a single legal framework: the 1995 EU Data Protection Directive.  More striking, however, are the granular differences among the European jurisdictions, and the similarities between German and U.S. firms in the language in which privacy is discussed, in the particular mandates and institutions shaping privacy’s governance, and in the architecture for privacy protection and decisionmaking.  This Part then seeks to understand the construction of the privacy “field” that shapes these differing country landscapes.  Such inquiry includes the details of national implementation of the EU directive – the specificity and type of requirements placed on regulated parties; the content of regulation, with particular attention to the comparative focus on process-based as opposed to substantive mandates, and the use of ex ante guidance as opposed to prosecution and enforcement – as well as the structure and approach of the relevant data protection agency, including the size and organization of its staff; the degree to which it relies on technical and legal “experts” inside the agency rather than inside the companies it regulates; its use of enforcement and inspections; and the manner in which regulators and firms interact more generally.  Yet it also includes factors beyond privacy regulation itself, including other legal mandates, elements characteristic of national corporate structure, and societal factors such as the roles of the media and other citizen, industry, labor, or professional organizations that determine the “social license” governing a corporation’s freedom to act.

Finally, the Article’s Part IV outlines two elements of a new account of privacy’s development, informed by comparative analysis.  First, based on the data from four jurisdictions, it engages in a preliminary analysis regarding which elements of these privacy fields our interviews and other data suggest have fostered, catalyzed and permitted the most adaptive responses in the face of novel challenges to privacy.  Second, it suggests something important about the role of professional networks in the diffusion of practices across jurisdictional lines in the face of important social and technological change.  The adaptability of distinct regulatory approaches and institutions in the face of novel challenges to privacy has never been more important.  Our comparative analysis provides novel insight into the ways that different regulatory choices have interacted with other aspects of the privacy field to shape corporate behavior, offering important insights for all participants in policy debates about the governance of privacy.


[1] See The 30th Anniversary of the OECD Privacy Guidelines, OECD, www.oecd.org/sti/privacyanniversary (last visited Jan. 22, 2013).

[2] See, e.g., Neil Robinson et al., RAND Eur., Review of the European Data Protection Directive (2009).

[3] See, e.g., Konrad Lischka & Christian Stöcker, Data Protection: All You Need to Know about the EU Privacy Debate, Spiegel Online (Jan. 18, 2013, 10:15 AM), http://www.spiegel.de/international/europe/the-european-union-closes-in-on-data-privacy-legislation-a-877973.html.

[4] See, e.g., Adam Popescu, Congress Sets Sights On Fixing Privacy Rights, readwrite (Jan. 18, 2013), http://readwrite.com/2013/01/18/new-congress-privacy-agenda-unvelied; F.T.C. and White House Push for Online Privacy Laws, N.Y. Times, May 10, 2012, at B8, available at http://www.nytimes.com/2012/05/10/business/ftc-and-white-house-push-for-online-privacy-laws.html?_r=0; Fed. Trade Commission, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers (Mar. 2012), available at http://www.ftc.gov/os/2012/03/120326privacyreport.pdf.

[5] See H. Jeff Smith, Managing Privacy: Information Technology and Corporate America (1994).

[6] See David H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada, & the United States (1989).

[7] See Kenneth A. Bamberger & Deirdre K. Mulligan, New Governance, Chief Privacy Officers, and the Corporate Management of Information Privacy in the United States: An Initial Inquiry, 33 Law & Pol’y 477 (2011); Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Books and on the Ground, 63 Stan. L. Rev. 247 (2011).

[8] Kenneth A. Bamberger & Deirdre K. Mulligan, Catalyzing Privacy: Lessons From Regulatory Choices and Corporate Decisions on Both Sides of the Atlantic (MIT Press, forthcoming 2014).

[9] See Paul J. DiMaggio & Walter W. Powell, The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields, 48 Am. Soc. Rev. 147, 148 (1983) (defining an organizational field as “those organizations that, in the aggregate, constitute a recognized area of institutional life: key suppliers, resource and product consumers, regulatory agencies, and other organizations that produce similar services or products”); Lauren B. Edelman, Overlapping Fields and Constructed Legalities: The Endogeneity of Law, in Private Equity, Corporate Governance and the Dynamics of Capital Market Regulation 55, 58 (Justin O’Brien ed., 2007) (defining a legal field as “the environment within which legal institutions and legal actors interact and in which conceptions of legality and compliance evolve”).

Joel Reidenberg, Privacy in Public

Comment by: Franziska Boehm

PLSC 2013

Workshop draft abstract:

The existence and contours of privacy in public are in a state of both constitutional and societal confusion.  As the concurrences in U.S. v. Jones suggested, technological capabilities and deployments undermine the meaning and value of the Fourth Amendment’s third-party and “reasonable expectation of privacy” doctrines.  The paper argues that the conceptual problem derives from the evolution of four stages of development in the public nature of personal information.  In the first stage, the obscurity of information in public provides protection for privacy and an expectation of privacy.  In the second stage, accessibility begins to erode the protection afforded by obscurity.  The third stage, complete transparency of information in public, erases any protection through obscurity and undercuts any privacy expectations.  Finally, the fourth stage, publicity of information – the affirmative disclosure and dissemination of information – destroys traditional notions of protection and expectations.  At the same time, publicity without privacy protection undermines constitutional values of public safety and fair governance.  The paper argues that activity in public needs privacy protection framed in terms of “public regarding” and “non-public regarding” acts.

Jane Bambauer and Derek Bambauer, Vanished

Comment by: Eric Goldman

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2326236

Workshop draft abstract:

The conventional wisdom on Internet censorship assumes that the United States government makes fewer attempts to remove and delist content from the Internet than other democracies. Likewise, democratic governments are believed to make fewer attempts to control on-line content than the governments of non-democratic countries. These assumptions are theoretically sound: most democracies have express commitments to the freedom of speech and communication, and the United States has exceptionally strong legal immunities for Internet content providers, along with judicial protection of free speech rights that make it unique even among democracies. However, the conventional wisdom is not entirely correct. A country’s system of governance does not predict well how it will seek to regulate on-line material. And democracies, including the United States, engage in far more extensive censorship of Internet communication than is commonly believed.

This Article explores the gap between free speech rhetoric and practice by analyzing recently released Google data describing the official requests or demands to remove content that governments made to the company between 2010 and 2012. Controlling for Internet penetration and Google’s relative market share in each country, we examine international trends in content removal demands. Specifically, we explore whether some countries have a propensity to use unenforceable requests or demands to remove content, and whether these types of extra-legal requests have increased over time. We also examine trends within content categories to reveal differences in priorities among governments. For example, European Union governments more frequently seek to remove content for privacy reasons. More surprisingly, the United States government makes many more demands to remove content for defamation, even after controlling for population and Internet penetration.
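A rough sketch of the kind of analysis described, in Python with pandas and statsmodels. The variable names and synthetic numbers below are placeholders for the merged Transparency Report and country-level data, not the study’s actual variables, and ordinary least squares is our illustrative model choice, not necessarily the authors’.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the merged dataset: one row per country, combining
# removal-request counts with country-level covariates (all placeholders).
df = pd.DataFrame({
    "removal_requests":     [120, 45, 200, 15, 80, 60],
    "internet_penetration": [0.81, 0.70, 0.78, 0.40, 0.85, 0.65],
    "google_market_share":  [0.67, 0.93, 0.90, 0.95, 0.88, 0.92],
    "democracy":            [1, 1, 1, 0, 1, 0],
})

# Model request counts while controlling for Internet penetration and
# Google's local market share, as the Article describes.
model = smf.ols(
    "removal_requests ~ democracy + internet_penetration + google_market_share",
    data=df,
).fit()
print(model.summary())
```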

The Article pays particular attention to government requests to remove content based upon claims regarding privacy, defamation, and copyright enforcement. We make use of more detailed data prepared specially for our study that shows an increase in privacy-related requests following the European Commission’s draft proposal to create a Right To Be Forgotten.

Heather Patterson and Helen Nissenbaum, Context-Dependent Expectations of Privacy in Self-Generated Mobile Health Data

Comment by: Katie Shilton

PLSC 2013

Workshop draft abstract:

Rapid developments in health self-quantification via ubiquitous computing point to a future in which individuals will collect health-relevant information using smart phone apps and health sensors, and share that data online for purposes of self-experimentation, community building, and research. However, online disclosures of intimate bodily details coupled with growing contemporary practices of data mining and profiling may lead to radically inappropriate flows of fitness, personal habit, and mental health information, potentially jeopardizing individuals’ social status, insurability, and employment opportunities. In the absence of clear statutory or regulatory protections for self-generated health information, its privacy and security rest heavily on robust individual data management practices, which in turn rest on users’ understandings of information flows, legal protections, and commercial terms of service. Currently, little is known about how individuals understand their privacy rights in self-generated health data under existing laws or commercial policies, or how their beliefs guide their information management practices. In this qualitative research study, we interview users of popular self-quantification fitness and wellness services, such as Fitbit, to learn (1) how self-tracking individuals understand their privacy rights in self-generated health information versus clinically generated medical information; (2) how user beliefs about perceived privacy protections and information flows guide their data management practices; and (3) whether commercial and clinical data distribution practices violate users’ context-dependent informational norms regarding access to intimate details about health and personal well-being. Understanding information sharing attitudes, behaviors, and practices among self-quantifying individuals will extend current conceptions of context-dependent information flows to a new and developing health-related environment, and may promote appropriately privacy-protective health IT tools, practices, and policies among sensor and app developers and policy makers.

David Thaw, Criminalizing Hacking, Not Dating: Reconstructing the CFAA Intent Requirement

Comment by: Jody Blanke

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2226176

Workshop draft abstract:

The Computer Fraud and Abuse Act (CFAA) was originally enacted as a response to a growing threat of electronic crimes, a threat which continues to grow rapidly.  Congress, to address concerns about hacking and cybercrime, criminalized unauthorized access to computer systems through the CFAA.  The statute poorly defines this threshold concept of “unauthorized access,” however, resulting in widely varied judicial interpretation.  While this issue is perhaps still under-examined, the bulk of existing scholarship generally agrees that an overly broad interpretation of unauthorized access – specifically, one that allows private contract unlimited freedom to define authorization – creates a constitutionally impermissible result.  Existing scholarship, however, lacks workable solutions.  The most notable approach, prohibiting contracts of adhesion (e.g., website “Terms of Service”) from defining authorized access, strips system operators of their ability to post the virtual equivalent of “no trespassing” signs and set enforceable limits on the (ab)use of their private property.

This Article considers an alternative approach, based on an examination of what is likely the root cause of the CFAA’s vagueness and overbreadth problems – a poorly constructed mens rea element.  It argues that judicial interpretation may not be sufficient to effect Congressional intent concerning the CFAA, and calls for legislative reconstruction of the mens rea element to require a strong nexus between an individual’s intent and the unique computer-based harm sought to be prevented.  The Article proposes a two-part conjunctive test: first, an individual’s intent must be not merely to engage in an action (which technically results in unauthorized access), but to engage in unauthorized access itself; and second, the resultant actions must be in furtherance either of an (enumerated) computer-specific malicious action or of an otherwise-unlawful act.  While courts may be able to reinterpret the statute to accomplish the first part, this still leaves substantial potential for private agreements to create vagueness and overbreadth problems.  The second part of the test mitigates this risk, and thus Congressional intervention is required to preserve both the validity of the statute and the important protections it affords.
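The proposed two-part conjunctive test can be restated as a simple boolean predicate. This Python sketch merely mirrors the Article’s stated logic; the argument names are our own labels.

```python
def cfaa_mens_rea_satisfied(intended_the_act: bool,
                            intended_unauthorized_access: bool,
                            furthers_enumerated_computer_harm: bool,
                            furthers_otherwise_unlawful_act: bool) -> bool:
    """Part 1 (conjunctive): the defendant must intend not merely the action
    that happens to result in unauthorized access, but the unauthorized
    access itself.  Part 2 (disjunctive prongs): the resultant actions must
    further an enumerated computer-specific malicious action or an
    otherwise-unlawful act."""
    part_one = intended_the_act and intended_unauthorized_access
    part_two = furthers_enumerated_computer_harm or furthers_otherwise_unlawful_act
    return part_one and part_two
```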

Peter Winn, The Protestant Origins of the Anglo-American Right to Privacy

Comment by: Andrew Odlyzko

PLSC 2013

Workshop draft abstract:

In 1606, Attorney General Edward Coke and Chief Justice of the King’s Bench Sir John Popham, at the request of Parliament and the King’s Council, issued an opinion addressing the narrow question of when an ecclesiastical officer was authorized to administer the “oath ex officio” during proceedings at canon law.  They held that, except in very narrow circumstances, the accused in such proceedings could not be compelled to take such an oath and testify against himself.  This opinion, representing a clear break from earlier medieval practice, where such procedures were common and unexceptionable, is traditionally understood as one of the great landmarks that eventually resulted in the establishment of a right to remain silent, now embodied in the Fifth Amendment of the U.S. Constitution.  In this article, I argue that, placed in its proper historical context, the Coke & Popham opinion also recognizes an enforceable legal right of privacy – a right of privacy in one’s thoughts.  Today, the right to keep one’s thoughts to oneself is so ingrained in our understanding of the world that it is difficult to imagine how radical this idea was at the time.  But in the medieval period, it was taken for granted that the jurisdiction of the authorities extended to the utmost limits of the human mind.  Furthermore, at the time Coke and Popham wrote, the most important affairs of the state were ecclesiastical in nature, and prosecution of the crime of heresy was as much a concern of the civil as of the religious authorities.  Although the holding of the opinion made it more difficult to prosecute heresy, its authors were by no means soft on heretics.  Furthermore, by limiting the jurisdiction of ecclesiastical authorities in a country where the King was also the head of the Church, Coke and Popham were also limiting the power of the sovereign state itself.  The opinion thus recognized, in a very limited way, the legal right of an individual to control access to a private sphere beyond the jurisdiction of the sovereign – a development which began the process of establishing what Brandeis would later call the “right to be let alone.”  This important step in the law was not driven by a utilitarian rationale (nothing could be a more effective means to prosecute heretics than administration of the oath); nor was it compelled by earlier medieval precedents (the authors tortured medieval case law to reach the desired outcome).  But in the text of the opinion itself, one can see what drove Coke and Popham to what was at the time such a counterintuitive result – the remorseless logic of a quintessentially Protestant theology.  The authors were concerned that in a panoptic state with the power to intrude into an individual’s thoughts, the first victim would be the authenticity of individual conscience, which, according to Protestant teaching, was so critically necessary for religious salvation.

Elizabeth Joh, Privacy Protests: Surveillance Evasion and Fourth Amendment Suspicion

Comment by: Tim Casey

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2285095

Workshop draft abstract:

To the police, evading surveillance is strong evidence that you’re a criminal; the problem is that the evasion may only be a protest against the surveillance itself.   How do we tell the difference, and why does it matter?

Surprisingly, these questions have not attracted serious attention from judges or legal commentators.  This is surprising because the means of surveillance have become ever more sophisticated and difficult to avoid.  If you want to track someone down, you can discover a surprising amount of information with increasing ease.  Sophisticated technologies have made the collection of data, verification of identity, and prediction of behavior simpler and faster.  These technologies have also greatly improved the capabilities of police investigations.  The police have added thermal imaging cameras, GPS trackers, cell phone site data, computer surveillance software, and DNA swabs to their investigative toolkit.

But some people resist these incursions and take steps to thwart police surveillance out of ideological belief or personal conviction.  Instructions and products are readily available on the internet.  Use photoblocker film on a license plate or a ski mask to defeat a red-light camera.  Avoid ordinary credit cards and choose only cash or prepaid credit cards to make a financial trail harder to detect.  Avoid cellphones unless they are prepaid phones or “freedom phones” from Asia that have all tracking devices removed.  Avoid using email unless you use disposable “guerilla email” addresses which disappear within an hour.  Use “spoof cards” that mask your identity on caller ID devices.  Burn your garbage to hamper investigations of your financial records or genetic evidence.  A professional can alter your digital self on the internet by erasing data or posting multiple false identities.  At the extreme end, you could live “off the grid” and cut off all contact with the modern world.

These are all examples of what I call privacy protests: actions individuals take to block or to thwart surveillance from the police for reasons that are unrelated to criminal wrongdoing.   Unlike people who hide their activities because they have committed a crime, those engaged in privacy protests do so primarily because they object to the presence of perceived or potential government surveillance in their lives.

Privacy protests are easily grouped together with the evasive actions taken by those who have committed crimes.   The evasion of police surveillance can look the same whether perpetrated by a criminal or a privacy protestor.  For this reason, privacy protests against the police in particular and the government in general are largely underappreciated within the criminal law literature.

This article aims to document privacy protests as well as to discuss how the police and the Fourth Amendment fail to take them into account.  These individual actions demonstrate that the boundaries of privacy and legitimate governmental action are the product of a dynamic process.  A more comprehensive account of privacy must consider not only the attempts of individuals to exert control over their own information, lives, and personal spaces, but also the ways in which they take active countermeasures against the government and other private actors to thwart attempts at surveillance.