Archives

Jef Ausloos, The Right to Erasure: Reconfiguring the Power Equilibrium over Personal Data

Comment by: Paul Bernal

PLSC 2013

Workshop draft abstract:

As people increasingly live (parts of) their lives online, the collection and processing of personal data have grown exponentially over the last few decades. More importantly, recent history suggests that power over personal data is consolidating in the hands of just a few (corporate) players. From the beginning, one of the most important rationales behind the European data protection framework was to protect individuals from such powerful data-controlling entities. Over the years, a compromise framework was achieved with the 1995 Data Protection Directive, which empowers individuals vis-à-vis data controllers while at the same time providing some fundamental safeguards that cannot be negotiated. Almost two decades later, however, one might ask to what extent the aim to ‘equalise bargaining positions’ is still achieved. This paper investigates the extent to which a right to erasure would (not) contribute to a more equitable power balance over personal data. This question has largely been ignored in (trans-Atlantic) debates on the so-called Right to be Forgotten (an umbrella term that also comprises the Right to Erasure).

2013 Participants

Linda Ackerman,
Privacy Activism

Alessandro Acquisti,
CMU

Meg Ambrose,
University of Colorado

Julia Angwin,
Times Books

Annie Anton,
Georgia Tech School of Interactive Computing

Axel Arnbak,
Institute for Information Law

Jef Ausloos,
University of Leuven, ICRI – iMinds

Ian Ballon,
Greenberg Traurig, LLP

Jane Bambauer,
University of Arizona James E. Rogers College of Law

Derek Bambauer,
University of Arizona James E. Rogers College of Law

Ken Bamberger,
University of California, Berkeley

Kevin Bankston,
Center for Democracy & Technology

Samantha Barbas,
SUNY Buffalo Law School

Solon Barocas,
New York University

Liza Barry-Kessler,
Gonzalez, Saggio & Harlan

Carol Bast,
University of Central Florida

Steven Bellovin,
Columbia University

Laura Berger,
Federal Trade Commission

Paul Bernal,
University of East Anglia, Norwich, UK

Ryan Biava,
University of Wisconsin-Madison

Jody Blanke,
Mercer University

Matt Blaze,
University of Pennsylvania

Marc Blitz,
Oklahoma City University

Franziska Boehm,
University of Münster

Courtney Bowman,
Palantir

Benedikt Burger,
trainee lawyer, North Rhine-Westphalia (Germany)

Ryan Calo,
University of Washington School of Law

Jean Camp,
Indiana University

Tim Casey,
California Western School of Law

Anupam Chander,
UC Davis

Bryan Choi,
Yale Information Society Project

Wade Chumney,
Georgia Institute of Technology

Danielle Citron,
University of Maryland School of Law

Andrew Clearwater,
Center for Law and Innovation

Amanda Conley,
O’Melveny & Myers

Chris Conley,
ACLU of Northern California

Kate Crawford,
University of New South Wales/Microsoft Research

Catherine Crump,
American Civil Liberties Union Foundation

Bryan Cunningham,
Palantir Technologies

Doug Curling,
New Kent Capital

Christopher Cwalina,
Holland & Knight

Anjali Dalal,
Yale Law School

Jamela Debelak,
ACLU of Washington

Judith DeCew,
Clark University, Worcester, MA

Michelle Dennedy,

Deven Desai,
Thomas Jefferson School of Law

Will Devries,
Google Inc.

Lorrainne Dixon,
Oracle Microsystems (BC)

Pam Dixon,
World Privacy Forum

Dissent Doe,
PogoWasRight.org

Nick Doty,
UC Berkeley, School of Information

Rossana Ducato,
Law Faculty, University of Trento (Italy)

Cynthia Dwork,
Microsoft Research

Catherine Dwyer,
Pace University

Mark Eckenwiler,
Perkins Coie LLP

Lilian Edwards,
Strathclyde University

Yan Fang,
Federal Trade Commission

Adrienne Felt,
Google

Roger Ford,
University of Chicago Law School

Tanya Forsheit,
InfoLawGroup LLP

Valita Fredland,
Indiana University Health, Inc.

Susan Freiwald,
University of San Francisco

Paul Frisch,
University of Oregon School of Law

Michael Froomkin,
U. Miami School of Law

Amy Gajda,
Tulane University Law School

Michael Geist,
University of Ottawa

Robert Gellman,
Privacy Consultant

Barton Gellman,
Woodrow Wilson School, Princeton University

Lauren Gelman,
BlurryEdge Strategies

Jan Gerlach,
University of St. Gallen (Switzerland)

Dorothy Glancy,
Santa Clara University School of Law

Eric Goldman,
Santa Clara University School of Law

Nathan Good,
Good Research

Jennifer Granick,
Stanford Law School Center for Internet and Society

John Grant,
Palantir Technologies

Eloise Gratton,
McMillan LLP

Jim Graves,
Carnegie Mellon University

David Gray,
University of Maryland School of Law

Rebecca Green,
William & Mary Law School

Seda Gurses,
KU Leuven

Patrick Hagan,
Deloitte Security & Privacy

Joseph Hall,
Center for Democracy & Technology

Edina Harbinja,
University of Strathclyde, Glasgow, UK

Woodrow Hartzog,
Samford University’s Cumberland School of Law

Allyson Haynes Stuart,
Charleston School of Law

Paula Helm,
University of Passau, DFG Training Group “Privacy”

Stephen Henderson,
The University of Oklahoma

Mike Hintze,
Microsoft Corporation

Dennis Hirsch,
Capital Law School

Jaap-Henk Hoepman,
Radboud University Nijmegen

Lance Hoffman,
The George Washington University

Marcia Hofmann,
Electronic Frontier Foundation

Sophie Hood,
New York University

Chris Hoofnagle,
UC Berkeley Law

Margaret Hu,
Duke Law School

Trevor Hughes,
IAPP

Renee Hutchins,
University of Maryland School of Law

Elizabeth Joh,
University of California, Davis, School of Law

Elizabeth Johnson,
Poyner Spruill LLP

D.R. Jones,
University of Memphis Cecil C. Humphreys School of Law

Margot Kaminski,
Information Society Project at Yale Law School

Orin Kerr,
George Washington University Law School

Ian Kerr,
University of Ottawa, Faculty of Law

Jennifer King,
UC Berkeley School of Information

Anne Klinefelter,
University of North Carolina

Tracy Ann Kosa,
Microsoft

Rick Kunkel,
University of St. Thomas

Susan Landau,
Privacyink.org

Claudia Langer,
Karlsruhe Institute of Technology and Saarland University, Germany

Stephen Lau,
University of California, Office of the President

Travis LeBlanc,
Office of California Attorney General Kamala D. Harris

Ronald Lee,
Arnold & Porter LLP

Pedro Leon,
Carnegie Mellon University

Avner Levin,
Privacy and Cyber Crime Institute, Ryerson University

Eden Litt,
Northwestern University

Jennifer Lynch,
Electronic Frontier Foundation

Mark MacCarthy,
Georgetown University

Tobias Mahler,
Norwegian Research Center for Computers and Law

Sona Makker,
Law Student

Laureli Mallek,
Attorney & CIPP/US

Carter Manny,
University of Southern Maine

Kirsten Martin,
George Washington University

Alice Marwick,
Fordham University

Aaron Massey,
Georgia Institute of Technology

Kristen Mathews,
Proskauer Rose LLP

Andrea Matwyshyn,
The Wharton School, University of Pennsylvania

Aleecia McDonald,
Stanford CIS

William McGeveran,
University of Minnesota Law School

Anne McKenna,
Silverman/Thompson

Joanne McNabb,
California Attorney General’s Office

Edward McNicholas,
Sidley Austin LLP

Lea Mekhneche,
University of California, Berkeley

Adam Miller,
California Department of Justice

Kate Miltner,
Microsoft Research New England (Social Media Collective)

Tracy Mitrano,
Cornell University

Julia Maria Moenig,
University of Passau, Passau, Germany

Manas Mohapatra,
Federal Trade Commission

Caren Morrison,
Georgia State University College of Law

Laura Moy,
Institute for Public Representation

Deirdre Mulligan,
School of Information and BCLT, UC Berkeley

Scott Mulligan,
Skidmore College

Arvind Narayanan,
Princeton University

Helen Nissenbaum,
New York University

Gregory Nojeim,
Center for Democracy & Technology

Andrew Odlyzko,
University of Minnesota

Al Ogata,
Hawaii Medical Service Association

Paul Ohm,
University of Colorado Law School

Nicole Ozer,
ACLU of California

Heather Patterson,
New York University

Stephanie Pell,
SKP Strategies, LLC

Scott Peppet,
University of Colorado Law School

Randy Picker,
University of Chicago Law School

Tamara Piety,
University of Tulsa College of Law

Vince Polley,
KnowConnect PLLC

Jules Polonetsky,
Future of Privacy Forum

Judith Rauhofer,
University of Edinburgh

Alan Raul,
Sidley Austin LLP

Kriss Ravetto,
UC Davis, Cinema and Technocultural Studies

Priscilla Regan,
Department of Public and International Affairs, George Mason University

Joel Reidenberg,
Fordham University School of Law

Neil Richards,
Washington University School of Law

David Robinson,
Information Society Project at Yale Law School

Thomas Roessler,
W3C

Sasha Romanosky,
New York University

Stewart Room,
Law Society of England & Wales

Arnold Roosendaal,
TNO Strategy and Policy for the Information Society

Larry Rosenthal,
Chapman University School of Law

Alan Rubel,
University of Wisconsin, Madison

Ira Rubinstein,
NYU School of Law

James Rule,
Center for the Study of Law and Society, UC Berkeley

Pamela Samuelson,
Berkeley Law School

Julian Sanchez,
Cato Institute

Barbara Sandfuchs,
University of Passau, Germany

Albert Scherr,
UNH School of Law

Stacey Schesser,
Office of California Attorney General

Dawn Schrader,
Cornell University

Russell Schrader,
Visa

Sarah Schroeder,
Federal Trade Commission

Jason Schultz,
UC Berkeley School of Law

Paul Schwartz,
Berkeley Law

Victoria Schwartz,
The University of Chicago Law School

Galina Schwartz,
EECS, UC Berkeley

Andrew Selbst,
U.S. District Court

Wendy Seltzer,
World Wide Web Consortium (W3C)

Junichi Semitsu,
University of San Diego School of Law

Stuart Shapiro,
MITRE Corporation

Katie Shilton,
University of Maryland, College Park

Babak Siavoshy,
UC Berkeley Law

David Sklansky,
University of California, Berkeley

Robert Sloan,
University of Illinois at Chicago

Christopher Slobogin,
Vanderbilt

Christopher Soghoian,
American Civil Liberties Union

Daniel Solove,
George Washington Law School

Ashkan Soltani,

Kelly Sorensen,
Ursinus College

Robert Sprague,
University of Wyoming

Jay Stanley,
ACLU

Jeffrey Steele,
California Department of Justice

Gerard Stegmaier,
Wilson Sonsini Goodrich & Rosati

Amie Stepanovich,
EPIC

Lior Strahilevitz,
University of Chicago

Clare Sullivan,
Law School, University of South Australia

Harry Surden,
University of Colorado Law School

Latanya Sweeney,
Harvard University

Peter Swire,
Ohio State University

Rahul Telang,
Carnegie Mellon University

Omer Tene,
Israeli College of Management School of Law

David Thaw,
University of Connecticut School of Law

Frank Torres,
Microsoft

Michael Traynor,
ALI; and Cobalt LLP

Jonathan Tse,
Cornell University

Blase Ur,
Carnegie Mellon University

Jennifer Urban,
Berkeley Law

Salil Vadhan,
Harvard University School of Engineering and Applied Sciences

Jennifer Valentino-Devries,
The Wall Street Journal

Joris Van Hoboken,
IViR, University of Amsterdam

Viola Schmid,
Technical University Darmstadt, Germany

Colette Vogele,
Without My Consent

Serge Voronov,
Duke University School of Law

Yang Wang,
Syracuse University

Richard Warner,
Chicago-Kent College of Law

Tara Whalen,
University of Ottawa

Jan Whittington,
University of Washington

Stephen Wicker,
School of Electrical and Computer Engineering, Cornell University

Lauren Willis,
Loyola Law School Los Angeles

Peter Winn,
U.S. Department of Justice

Christopher Wolf,
Future of Privacy Forum

Felix Wu,
Cardozo School of Law

Heng Xu,
The Pennsylvania State University

Malte Ziewitz,
New York University

Sebastian Zimmeck,
Columbia University

Frederik Zuiderveen Borgesius,
University of Amsterdam, Institute for Information Law

Scott Peppet, Privacy Deals

Comment by: Tanya Forsheit

PLSC 2013

Workshop draft abstract:

This paper examines a previously unexplored way in which markets may act to constrain privacy violations: through privacy-related contractual deal terms in mergers, acquisitions, financings, and other corporate transactions. The thesis is that corporate actors perceive regulatory risk related to information security and privacy, and that they seek to moderate that risk when acquiring or financing other entities by conducting privacy-related due diligence and including privacy-related terms in their deals. To the extent that such diligence and terms are effective, they not only prevent or dampen the success of privacy-negligent target firms in a given transaction but also may create a more widespread fear in startups that privacy negligence will prevent future acquisition, financial exit, or growth. This more widespread fear — that ignoring privacy concerns may mean missing out on a future exit — may have pro-privacy effects far beyond the actual threat of regulatory enforcement or prosecution. This paper explores “privacy deals” by interviewing privacy lawyers focused on M&A. It reports on the findings of those interviews, in particular the types of privacy-related deal terms already in use. It also compares different types of corporate transactions — such as acquisitions versus venture financings — to determine when in the life cycle of technology-related firms privacy begins to have transactional implications. Throughout, the goal of the paper is to shed light on the ways in which corporate transactions may or may not be privacy-protective, and to raise the legal and policy implications of such “privacy deals.”

Joris V.J. van Hoboken, Axel M. Arnbak, and Nico A.N.M. van Eijk, Obscured by Clouds or How to Address Governmental Access to Cloud Data From Abroad

Comment by: Carter Manny

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2276103

Workshop draft abstract:

Governments, companies and citizens have started to move their data and ICT operations into the cloud. For cloud service customers, this development leads to a decrease in overview of and control over governmental access to data for law enforcement and national security purposes. This is the main conclusion of a recent study by the authors for the Dutch education and research sector, a study that was widely covered in international media. The study analyzes the legal possibilities for U.S. governmental agencies to gain direct access in the U.S. to the cloud data of Europeans, and the implications for the decision-making process of potential cloud customers. It finds that U.S. laws on national security and law enforcement provide wide and relatively unchecked possibilities of access to data of non-U.S. persons abroad held by cloud providers conducting business in the U.S. The study further concludes that the transition towards the cloud has important negative consequences for cloud customers’ ability to manage information confidentiality and security, as well as for the privacy and protection of European end-users’ data in relation to foreign governments.

The mere possibility that information in the cloud could be accessed by foreign governmental agencies has started to affect decision making by potential cloud customers. Concerns of governmental and corporate customers spur market developments such as federated and encrypted solutions as well as ‘national clouds’ that are ‘Patriot Act Proof’. These developments in turn affect market conditions and competition, impacting U.S.-based cloud services in particular. In addition, the possibility of foreign governmental access impacts the privacy of cloud end-users and causes chilling effects on the use of cloud computing. Where data confidentiality is considered vital, the situation has led to calls for regulatory action and the termination of cloud contracts – as in cases of medical data storage in electronic patient record systems and biometric data processing in relation to passports. Furthermore, the mere possibility of access from the U.S. continues to be the subject of high-level legal and political debate in trans-Atlantic discussions about the protection of privacy and information security in the cloud, most notably in the context of the hotly debated revision of the EU regulatory framework on data protection.

This paper will go beyond the previous study on the legal state of affairs and its impact on decision-making about the cloud, and will address the question whether, and if so how, the laws in Europe could be adapted to better protect the privacy and information confidentiality interests of cloud computing end-users.

It will first address the ongoing EU data protection revision, which many see as the proper instrument to ensure that foreign governmental access to cloud data meets EU standards of privacy and information security. However, the proposed EU Data Protection Regulation clearly excludes national security regulations from its material scope (Article 2[2a] of the proposal). Nonetheless, the January 2013 draft report of the European Parliament rapporteur introduces prior authorization by national supervisory authorities as an additional safeguard (Article 43a). Given the exclusion of ‘national security’ from the material scope of the proposed Regulation and from EU law in general, such proposals to address foreign cloud surveillance may prove ineffective. The complex relation between EU privacy laws and national security does, however, pose more fundamental questions, including in a strictly European context. For example, data availability and data access for law enforcement and national security purposes are increasingly interdependent in today’s information environments. The further privatization of surveillance that the transition to the cloud enables – meaning that personal data is collected by private entities and subsequently accessed by public authorities – may be an argument for introducing additional legal safeguards on data collection and use in a regulatory initiative that is primarily targeted at industry stakeholders.

Second, the paper will assess whether improving oversight of foreign governmental access at the national and international level could be a better approach. In practice, data transfers between different state authorities in the sphere of national security seem to be mediated by a pragmatic “quid-pro-quo approach”. The resulting exchange between governmental agencies in different countries introduces a dynamic of its own with respect to data collection, the construction of privacy safeguards in relevant laws, and national oversight of their use in practice. Such oversight currently focuses mostly on the relation of governmental agencies to their own residents. This stands in contrast with the increased possibilities of gaining access to the data of people abroad.

At a recent hearing in the European Parliament, EU Commissioner Viviane Reding declared that no third-country legislation overrules the European privacy regulations, and that “the International Court of Justice based in The Hague is the final arbiter on such disputes”.  Her statement will not be the final word on this matter, but does illustrate the complex political landscape associated with regulating foreign cloud surveillance. Nevertheless, the reality of today is that foreign data access is obscured by the cloud, with serious consequences for decision-making about cloud providers across the globe. The question of how the European regulator should respond becomes more relevant by the day.

Jennifer Stisa Granick, Principles for Regulation of Government Surveillance in the Age of Big Data

Comment by: Ronald Lee

PLSC 2013

Workshop draft abstract:

Traditionally, regulation of government surveillance in favor of privacy and other civil liberties has focused on government action at the time of information collection. However, technological innovations have revolutionized government surveillance, due to the distribution of cheap sensor/collection technology; voluntary information disclosure/collection by individuals and civil entities; cheap storage; and increasingly powerful data processing. Current privacy-protecting policies, therefore, are increasingly irrelevant. It is too late to reconsider the wisdom of fusion centers, government access to civilian databases, or the proliferation of drones in the hands of local law enforcement. Thus, new principles and tools that focus on subsequent storage, processing, use, or disclosure are required if the legal system is to play an effective role in ensuring surveillance capabilities are not misused or abused. Sources for these principles and tools include (1) the Privacy Act, (2) minimization and necessity case law under the Wiretap Act, (3) fair information practices, (4) Privacy by Design principles, and (5) the Ninth Circuit opinion in Comprehensive Drug Testing. This paper will derive from these sources specific principles and practices that could help maintain individual privacy interests in the age of the government’s Big Data surveillance capabilities. My goal is to highlight the problems, the dead ends, and some possible solutions for policy makers.

Stephen B. Wicker and Stephanie Santoso, The Breakdown of a Paradigm – Cellular Regime Change and the Death of the Wiretap

Comment by: Susan Landau

PLSC 2013

Workshop draft abstract:

The coming change from a centralized cellular network to an end-to-end architecture imperils both law enforcement surveillance and the content/context model embodied in ECPA and CALEA.  This paper explores the nature of the new technology, and suggests possible models for future legislation.

Traditional cellular service is a wireless add-on to the public switched telephone network (PSTN), whose basic architecture is highly centralized. The endpoints – the handsets – have virtually no control over how calls are processed. This centralized architecture has enabled wiretaps, pen registers, and trap and trace devices, all dependent on the handset passing content and context information to the network for processing. It stands in sharp contrast to the “end-to-end” architecture exemplified by the Internet. The network fabric of the Internet contains routers that generally operate only at the network, data link, and physical layers. Higher-layer activity, from the transport layer up to the application layer, resides in the endpoints. Barbara van Schewick [1] and others have shown that this end-to-end approach provides better performance, is more economical, and greatly spurs innovation relative to centralized architectures. There is thus strong pressure for centralized networks to move towards an end-to-end approach.

Voice-over-IP represents an initial movement in this direction. Though still centrally controlled, VoIP telephony promised to free voice and data traffic from having to follow the same network path. CALEA reined in this process by requiring a single point (usually in the form of a session border controller) that facilitates the creation of a duplicate packet stream that can be routed to law enforcement. Law enforcement is thus able to “maintain technological capabilities commensurate with existing statutory authority” [2]. Unlicensed Mobile Access (UMA) is a more ominous development. UMA allows cellular handsets to offload data and voice to unlicensed WiFi channels when such channels are available. Once again, a central point of focus – in this case, the network controller – preserves data collection capabilities.
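
To make the “single point of focus” concrete, here is a minimal, hypothetical sketch of the packet-duplication idea the abstract describes: a mediation point that forwards traffic normally but mirrors the packets of a targeted call to a collection store. The design and all names are illustrative assumptions, not any vendor’s or agency’s actual implementation.

```python
# Hypothetical sketch of a lawful-intercept mediation point: a session
# border controller that forwards VoIP packets unchanged and, when an
# intercept order is active for a call, appends a duplicate of each
# packet to a stream destined for law enforcement.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Packet:
    call_id: str    # context information: which call this packet belongs to
    payload: bytes  # content: the encoded voice or data

@dataclass
class SessionBorderController:
    # call_id -> duplicated packets awaiting delivery to law enforcement
    intercepts: Dict[str, List[Packet]] = field(default_factory=dict)

    def authorize_intercept(self, call_id: str) -> None:
        """Activate a (hypothetical) intercept order for a single call."""
        self.intercepts.setdefault(call_id, [])

    def forward(self, pkt: Packet) -> Packet:
        """Forward a packet; mirror it first if an order covers its call."""
        if pkt.call_id in self.intercepts:
            self.intercepts[pkt.call_id].append(pkt)  # duplicate stream
        return pkt  # original proceeds to its destination unmodified

sbc = SessionBorderController()
sbc.authorize_intercept("call-42")
sbc.forward(Packet("call-42", b"voice frame"))
sbc.forward(Packet("call-99", b"voice frame"))  # not covered by any order
print(len(sbc.intercepts["call-42"]))  # -> 1
```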

The endpoint of the cellular technology trajectory is becoming clear. A combination of unlicensed spectrum and open-source development will result in a commons-based cellular system with an end-to-end architecture. This paper considers what such a cellular network might look like. Incorporating the work of Elinor Ostrom [3] and the Open Source revolution [4], the paper explores how network routing and handset location algorithms can be developed in such a manner that wiretaps, pen registers, and trap and trace devices become completely obsolete. In particular, the paper considers networks that have no concept of dialing and no centralized location databases. Having established a general model for a commons-based cellular system, the paper considers possible solutions for limited yet effective support of law enforcement data collection that acknowledge the nature of the new technology, as well as appropriate alternatives to the content/context distinction.
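
As one illustration of what “no centralized location databases” could mean in practice, the following toy sketch (our assumption, not the authors’ design) spreads handset location records across peers in the style of a distributed hash table, so no single operator holds a complete database on which a pen register or trap-and-trace order could be served.

```python
# Toy illustration (an assumption, not the authors' design): handset
# location records are scattered across peer nodes by hash, DHT-style,
# so no single node -- and no carrier -- holds the complete location
# database that a pen register or trap-and-trace order could target.

import hashlib
from typing import Optional

NUM_NODES = 8
nodes = [dict() for _ in range(NUM_NODES)]  # each peer's tiny shard

def responsible_node(handset_id: str) -> int:
    """Map a handset identifier to the peer that stores its record."""
    return hashlib.sha256(handset_id.encode()).digest()[0] % NUM_NODES

def publish_location(handset_id: str, locator: str) -> None:
    """The handset pushes its current locator to exactly one peer."""
    nodes[responsible_node(handset_id)][handset_id] = locator

def lookup(handset_id: str) -> Optional[str]:
    """A caller resolves a handset by asking only the responsible peer."""
    return nodes[responsible_node(handset_id)].get(handset_id)

publish_location("handset-alice", "relay-17")
publish_location("handset-bob", "relay-03")
print(lookup("handset-alice"))             # -> relay-17
print(sum(1 for shard in nodes if shard))  # records are spread across peers
```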


[1] Barbara van Schewick, Internet Architecture and Innovation, Cambridge: MIT Press 2010.

[2] Freeh, Louis Joseph, “Digital Telephony and Law Enforcement Access to Advanced Telecommunications Technologies and Services,” Joint Hearings on H.R. 4922 and S. 2375, 103d Cong. 7, 1994.

[3] Hess, C. and Ostrom, E. Understanding Knowledge as a Commons: From Theory to Practice, MIT Press: Boston, 2006.

[4] Glyn Moody, Rebel Code: Linux And The Open Source Revolution, Cambridge MA: Basic Books, 2002.

Stephanie K. Pell and Christopher Soghoian, Your Secret Technology’s No Secret Anymore: Will the Changing Economics of Cell Phone Surveillance Cause the Government to “Go Dark?”

Comment by: Susan Landau

PLSC 2013

Workshop draft abstract:

Since the mid-1990s, U.S. law enforcement agencies have used a sophisticated surveillance technology that exploits security flaws in cell phone networks to locate and monitor mobile devices covertly, without requiring assistance from wireless carriers. This Article explores the serious privacy and security issues associated with the American government’s continued exploitation of cell phone network security flaws. It argues that legislative and industry action is needed if only to avoid a single ironic result: the government may unintentionally compromise its ability to conduct standard, carrier-assisted electronic surveillance. Without reform, it is likely that mobile device and software vendors will adopt end-to-end encryption to provide their customers with secure communications, causing wireless communications to go dark to law enforcement’s gaze. Moreover, the U.S. government’s reflexive obfuscation of this surveillance practice facilitates additional harms: enabling foreign espionage and domestic industrial espionage on U.S. soil and encouraging ubiquitous monitoring by private parties.

The U.S. government monitors mobile phones via cell site simulators (CSS), which functionally mimic cell phone towers. CSS exploit a fundamental security flaw in all cellular devices: the devices cannot authenticate the origin of signals but merely connect to any nearby source whose signal purports to be from a tower operated by a licensed provider. Once a phone erroneously connects to a CSS, its location can be determined, and its calls, text messages and data can be intercepted, recorded, redirected, manipulated or blocked.
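
The selection flaw described above can be captured in a deliberately simplified toy model (real cellular attachment procedures are far more complex, and all names here are invented): the handset attaches to the loudest nearby station whose claim to be a licensed tower is accepted at face value.

```python
# Toy model of the attachment flaw: nothing in the selection step verifies
# a station's self-asserted claim to be a licensed carrier tower, so a
# simulator that transmits louder wins. Greatly simplified for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class BaseStation:
    name: str
    claims_licensed_carrier: bool  # self-asserted and never authenticated
    signal_strength_dbm: float     # higher (less negative) = stronger

def attach(candidates: List[BaseStation]) -> BaseStation:
    """Pick the strongest station making an (unverified) carrier claim."""
    plausible = [b for b in candidates if b.claims_licensed_carrier]
    return max(plausible, key=lambda b: b.signal_strength_dbm)

nearby = [
    BaseStation("real-tower-A", True, -95.0),
    BaseStation("real-tower-B", True, -90.0),
    # A cell site simulator makes the same unverified claim, only louder:
    BaseStation("cell-site-simulator", True, -60.0),
]
print(attach(nearby).name)  # -> cell-site-simulator
```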

Law enforcement, intelligence agencies, and the military have presumably used CSS to their advantage: when a target’s phone number is unknown or a mobile device has no GPS chip, they can monitor every phone in a geographic area using briefcase-sized CSS hardware. Moreover, when the government cannot obtain a phone company’s assistance, such as in operations abroad, it can use CSS to conduct surveillance without the carrier’s knowledge.

By intercepting signals directly, CSS circumvent the limited but useful privacy protections offered by commercial third parties. While privacy scholarship and recent Supreme Court jurisprudence often denounce the third party doctrine, this Article argues, counterintuitively, that third-party control of data can protect privacy. When compared with warrantless, unmediated government surveillance, third parties can act as gatekeepers with the capacity to challenge government overreach, particularly when market incentives and customer interests align with privacy concerns. These intermediaries can even invoke judicial scrutiny of government surveillance practices. Their efforts can create opportunities for courts to develop new Fourth Amendment doctrine while scrutinizing surveillance practices, as with the concurring opinions in U.S. v. Jones, and for Congress to regulate these practices by statute.

To date, legal scholarship has failed to consider the effects of CSS both within and outside of the domestic law enforcement context. Indeed, the privacy and security risks associated with CSS cannot be cabined by the Fourth Amendment or statute, for the problems extend beyond America’s borders. Western democracies no longer have a monopoly over access to CSS technology. There is a robust market in CSS technologies, and several vendors around the world sell to any government or individual who can pay their price.

Surveillance is also increasingly ubiquitous. Researchers have created low-cost, easy-to-construct CSS. For under $2,500, tech-savvy criminals can purchase off-the-shelf equipment to build their own CSS. Less robust “passive” interception of nearby calls is also possible by modifying a widely available $20 cell phone. Wiretapping is no longer the exclusive province of governments, but is equally available to private investigators, identity thieves, and industrial spies.

Despite this significant technological change, the U.S. government continues to shield information about its own use of CSS, ostensibly to protect such use in the future. This opacity comes at a cost: treating CSS as solely a “sources and methods” protection issue suppresses public debate and education about the security vulnerabilities in our cell phone networks. That trade-off might have been reasonable when access to CSS was privileged and expensive, but the rapid democratization of surveillance is changing the balance of privacy and security equities.

U.S. government use of CSS accentuates the fundamental tension between government surveillance capabilities and the security of networks. When Congress has grappled with this conflict in the past, it gave priority to surveillance capabilities. Today, however, the same threat environment that informs ongoing cyber security legislative efforts mandates that any solution crafted to cabin the harms of CSS recognize the primacy of network security.

Roger Allan Ford, Unilateral Invasions of Privacy

Comment by: Avner Levin

PLSC 2013

Workshop draft abstract:

Most people seem to agree that individuals have too little privacy, and most proposals to address that problem focus on ways to give individuals more control over how information about them is used. Yet in nearly all cases, information subjects are not the parties who make decisions about how information is collected, used, and disseminated; instead, third parties make unilateral decisions to collect, use, and disseminate information about others. These potential privacy invaders, acting without input from information subjects, are the parties to whom proposals to protect privacy must be directed.

This essay develops a probabilistic theory of privacy invasions rooted in the incentives of potential privacy invaders. It first briefly describes the different kinds of information flows that can result in losses of privacy and the ways in which third parties can unilaterally decide that such information flows will occur. It then analyzes the costs and benefits faced by these potential privacy invaders, arguing that these costs and benefits explain what makes some invasions of privacy more likely than others. Potential privacy invaders are more likely to act when their own costs and benefits make an information flow worthwhile, regardless of the costs and benefits to society. And potential privacy invaders are quite sensitive to changes in these costs and benefits, unlike information subjects, for whom transaction costs can overwhelm incentives to make information more or less private.

This has important consequences for the design of effective privacy regulations. Effective regulations are those that help match the costs and benefits faced by a potential privacy invader with the costs and benefits to society of a given information flow. Law should help do this by raising or lowering the costs of a privacy invasion, but only after taking account of other costs (from technology and social norms) and benefits faced by the potential privacy invader.
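
The incentive gap the essay describes lends itself to a compact toy model. The following formalization is an illustrative assumption added here, not notation from the paper: let b and c be the invader’s private benefit and cost of an information flow, and B and C the corresponding totals for society.

```latex
% Illustrative formalization; the symbols are assumptions, not the paper's.
\[
\text{invader acts iff}\quad b - c > 0,
\qquad
\text{flow is socially desirable iff}\quad B - C > 0 .
\]
% Socially excessive invasions occupy the region
\[
b - c \;>\; 0 \;>\; B - C .
\]
% A sanction equal to the net external harm h = (C - c) - (B - b)
% realigns the private calculus with the social one, since
\[
b - c - h \;=\; B - C .
\]
```

On this toy account, law “raises or lowers the costs of a privacy invasion” precisely by moving the expected sanction toward the net external harm h.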

Kirsten Martin, An empirical study of factors driving privacy expectations online

Comment by: Annie Anton

PLSC 2013

Workshop draft abstract:

Recent work suggests that conforming to privacy notices online is neither necessary nor sufficient to meet the privacy expectations of users; however, a direct comparison between the two types of privacy judgments has not been performed. In order to examine whether and how judgments about privacy expectations differ from judgments about compliance with privacy notices, four factorial vignette studies were conducted covering targeted advertising and the tracking of information online, measuring both the degree to which scenarios were judged to meet privacy expectations and the degree to which they were judged to comply with privacy notices. The study tests the hypotheses that (a) individuals hold different privacy expectations based on the context of their online activity and (b) notice and consent varies in its effectiveness in addressing online privacy expectations across different contexts. The general goal of the larger project is to better understand privacy expectations across contexts online, leveraging a contextual approach to privacy.

Initial findings from pilot studies have identified factors – such as whether data is used for friends, how long the data is stored, and the type of information captured – that vary in importance to privacy judgments depending on the online context. In addition, the analysis of privacy notices suggests that users have an “expectations premium”, whereby a given scenario meets privacy expectations to a lesser extent than it is judged to comply with the privacy notice. In particular, using and tracking click information, using an individual’s name, and using the information for advertising are judged to comply with the privacy notice yet do not meet privacy expectations.
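
As a concrete reading of the “expectations premium”, here is a minimal sketch with invented data: the premium is taken as the gap between a scenario’s judged compliance with the notice and the degree to which it is judged to meet expectations. The scale, numbers, and factor names are assumptions for illustration, not the study’s results.

```python
# Minimal sketch with invented data: the "expectations premium" computed
# per vignette factor as (mean judged notice compliance) minus (mean judged
# expectations met). Scale and numbers are hypothetical.

from statistics import mean

# (judged notice compliance, judged expectations met), on a -100..100 scale
ratings = {
    "track click information": [(35, -10), (40, -5), (30, -15)],
    "use individual's name":   [(25, -20), (20, -25)],
    "use for advertising":     [(45, 0), (50, -5)],
}

for factor, pairs in ratings.items():
    premium = mean(c for c, _ in pairs) - mean(e for _, e in pairs)
    # positive premium: the scenario "complies" yet violates expectations
    print(f"{factor}: expectations premium = {premium:.1f}")
```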

In this paper for PLSC, judgments about privacy expectations online will be compared to judgments about compliance with privacy notices along three dimensions: the judgments themselves, the factors that contribute to the judgments, and how the judgments are made. The findings will (1) identify important online contexts with similar privacy expectations (e.g., gaming, shopping, socializing, blogging, researching), (2) prioritize the role of notice and consent in addressing privacy expectations within different contexts, and (3) identify the factors, and their relative importance, in developing privacy expectations for specific contexts online.

The findings have implications for how firms should attempt to meet the privacy expectations of users. When privacy notices are found insufficient to meet privacy expectations, individuals have attempted to pull out of the information exchange and obfuscate their behavior using tools such as CacheCloak, donottrack.us, Bit Torrent Hydra, TOR, and TrackMeNot, which work to let users maintain their privacy expectations regardless of the privacy policy of a website. Understanding how, if at all, judgments about privacy notices are related to privacy expectations should help firms avoid unnecessary and unintentional privacy violations caused by an overreliance on privacy notices.

Larry Rosenthal, Binary Searches and the Central Idea of the Fourth Amendment

Comment by: Marc Blitz

PLSC 2013

Workshop draft abstract:

Many scholars and judicial decisions have identified privacy as the central value at the root of the Fourth Amendment’s prohibition on unreasonable search and seizure. Yet the conception of Fourth Amendment privacy is deeply contested, and Fourth Amendment jurisprudence has oscillated between competing conceptions of privacy. On the libertarian conception, the Fourth Amendment works to identify a private domain free from unwarranted governmental intrusion; on the pragmatic conception, privacy is a function of an effort to balance liberty against law enforcement interests.

Few cases force as clear a choice between these competing conceptions as Florida v. Jardines, in which the United States Supreme Court will decide whether the use of a drug-detection dog to determine whether a home contains contraband constitutes a “search” regulated for reasonableness under the Fourth Amendment. The use of a properly trained drug-detection dog is often characterized as a “binary search” because it discloses nothing other than the presence or absence of contraband. Deciding whether a binary search should be regarded as infringing a legitimate expectation of privacy is no small matter. Indeed, to decide Jardines, one is forced to choose between the libertarian and pragmatic conceptions of Fourth Amendment privacy. From a libertarian perspective, there is no stronger candidate for a private domain free from official scrutiny than the home. Yet there are powerful pragmatic arguments against limiting the use of binary search techniques. A binary search discloses nothing of interest about the innocent; it reveals only that an individual has used the privacy of the home to break the law. The decision whether to treat a binary search as infringing an expectation of privacy that we should regard as legitimate accordingly reveals a great deal about our conception of privacy as a legal concept. This paper will explore what the binary search in general, and the Jardines decision in particular, tell us about the character of Fourth Amendment privacy.