Archives

Judith Rauhofer, Protecting their own: fundamental rights implications for a EU data sovereignty in the cloud

Comment by: Edward McNicholas

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/results.cfm?RequestTimeout=50000000

Workshop draft abstract:

In recent years there has been a significant increase in the systematic access of law enforcement and security agencies to personal data held by private sector entities. This is true both at EU level and at the level of individual EU member states. In 2011, the European Commission adopted a proposal for an EU Passenger Name Record (PNR) Directive that would oblige air carriers to provide data on all passengers entering or departing from the EU (PNR data) to national passenger information units, which would store that data, analyse it and transmit the results of the analysis to national law enforcement authorities. In 2012, the Commission published plans to allow the use of the EURODAC database, which collects fingerprints of asylum seekers for the purpose of intra-EU border control, for the prevention, detection and investigation of terrorist offences and other serious criminal offences. In the UK, the London Metropolitan Police is routinely granted access to the location and other data generated by users of Transport for London’s Oyster Card. Although the sharing of these types of data is widely criticized, its use by public bodies is ultimately subject to the fundamental rights protection provided by the national constitutions of the EU member states, the right to privacy set out in Article 8 of the European Convention on Human Rights and Articles 7 and 8 of the Charter of Fundamental Rights of the European Union. This means that any interference with those rights by public authorities must be in accordance with the law, necessary in a democratic society and proportionate. While the enforcement of these rights can be both costly and time-consuming, they provide a backstop to unlawful state interference that forms an essential part of a European culture of fundamental rights protection for its citizens’ information privacy.
Many EU citizens are therefore particularly sensitive to any threat to their privacy posed by institutions that may not be subject to that fundamental rights framework. This includes, in particular, the law enforcement authorities of non-EU countries. Requests from US law enforcement and security services for access to EU citizens’ personal data have therefore been met with widespread resistance from individual citizens, civil society organisations and regulators. In the past, transatlantic conflicts have erupted, inter alia, over the transfer of EU citizens’ PNR data to the Department of Homeland Security and the transfer of SWIFT data to a variety of US law enforcement bodies. More recently, concerns have been raised about the possibility that US law enforcement and security services may be allowed warrantless access to EU citizens’ personal data held by US-based cloud computing providers on the basis of the PATRIOT Act or FISAA. Attacks on these laws from within the US have largely focused on whether they might also be used to access US citizens’ personal information, despite the protections provided by the Fourth Amendment. However, their impact on the rights of EU citizens has only recently made headlines following the publication of a study on cybercrime and cloud computing commissioned by the European Commission. The existence of these rights of access raises questions about the new rules governing the transfer of personal data from the EU to non-EU countries that are currently being discussed in the context of the reform of the EU data protection regime. The new transfer rules, designed to facilitate cross-border data flows, were included in the reform proposals in response to claims that the current regime is too complex and constitutes a barrier to growth of the global digital economy.
This paper will analyse whether, in light of the threats to EU data identified above, the proposed new rules are in fact compliant with the fundamental rights framework in place in the EU and its member states, or whether additional safeguards are needed to ensure that EU sovereignty over its citizens’ privacy and personal data is protected.

Derek Bambauer, Exposed

Comment by: Colette Vogele

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2315583

Workshop draft abstract:

Ubiquitous recording capabilities via smartphones and Internet distribution have given rise to a disturbing trend: the unconsented distribution of images and videos that capture people nude or engaged in intimate activity. Current law may permit recourse against those who initially distribute this content, but immunity for intermediaries under 47 U.S.C. 230 generally permits the material to remain in circulation. This Article proposes a solution to this problem grounded in intellectual property doctrine. It first describes the problem, and advances both utilitarian and deontological theories of harm to justify regulatory intervention. Next, it proposes a civil IP-based regime (sounding either in copyright’s moral rights doctrine or in trademark law) that provides for injunctive and monetary relief against initial distributors, and that establishes a notice-and-takedown system for intermediaries, similar to that of the Digital Millennium Copyright Act. Written consent by subjects of the photos/videos would operate as an absolute defense, as would newsworthy use or distribution. Lastly, the Article examines potential doctrinal difficulties under the Copyright Act and the First Amendment, and analyzes why the proposal traverses both concerns.

Steven M. Bellovin, Matt Blaze, Sandy Clark, Susan Landau, Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet

Comment by: Anne McKenna

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312107

Workshop draft abstract:

For years, legal wiretapping was straightforward: the officer doing the intercept connected a tape recorder or the like to a single pair of wires. By the 1990s, though, the changing structure of telecommunications—there was no longer just “Ma Bell” to talk to—and new technologies such as ISDN and cellular telephony made life more complicated. Simple technologies would no longer suffice. In response, Congress passed the Communications Assistance for Law Enforcement Act (CALEA),[5] which mandated a standardized lawful intercept interface on all local phone switches. Technology has continued to progress, and in the face of new forms of communication—Skype, voice chat during multiplayer online games, many forms of instant messaging, etc.—law enforcement is again experiencing problems. The FBI has called this “Going Dark”:[6] their loss of access to suspects’ communications. According to news reports, they want changes to the wiretap laws to require a CALEA-like interface in Internet software.[7]

 

CALEA, though, has its own issues: it is complex software specifically intended to create a security hole—eavesdropping capability—in the already-complex environment of a phone switch. Warnings of danger have indeed come to pass, most famously in the so-called “Athens Affair”, where someone hacked into a Vodafone Greece switch and used the built-in lawful intercept mechanism to listen to the cell phone calls of high-ranking Greek officials, up to and including the Prime Minister.[8] In an earlier work,[9] we showed why extending CALEA to the Internet would create very serious problems, including very specifically creating many new security problems.

We proposed an alternative: legalized hacking, relying on the very large store of unintentional, naturally occurring vulnerabilities in software to obtain access to communications. Relying on vulnerabilities and hacking, though, poses a large set of legal and policy questions. Among these are:

  • Will it create disincentives to patching?
  • Will there be a negative effect on innovation? (Lessons from the so-called “Crypto Wars” of the 1990s are instructive here.)
  • Will law enforcement’s participation in vulnerability purchases skew the market?
  • Should law enforcement even be participating in a market where many of the sellers and other buyers are themselves criminals?
  • What happens if these tools are captured and repurposed by miscreants?
  • How does the Fourth Amendment affect use of these tools? In particular, since they can grant full access to a computer and not just to communications, should there be statutory restrictions similar to those in the Wiretap Act?[10]
  • Is the probability of success from such an approach too low for it to be useful?

There are also logistical and organizational concerns. Local and even state law enforcement agencies are unlikely to have the technical sophistication to develop exploits and the legally acceptable tools to use them. This in turn implies a greater role for the FBI and its labs. Is this intrusion of Federal authorities into local policing acceptable?  Will this turn the FBI more into an intelligence agency?

 

1 Steven M. Bellovin is a professor of computer science at Columbia University.

2 Matt Blaze is an associate professor of computer science at the University of Pennsylvania.

3 Sandy Clark is a Ph.D. student in computer science at the University of Pennsylvania.

4 Susan Landau is a Guggenheim Fellow.

5 Pub. L. No. 103-414, 108 Stat. 4279, codified at 47 USC 1001-1010.

6 Valerie Caproni, General Counsel of the FBI, Statement Before the House Judiciary Committee, Subcommittee on Crime, Terrorism, and Homeland Security, February 17, 2011, available at http://www.fbi.gov/news/testimony/going-dark-lawful-electronic-surveillance-in-the-face-of-new-technologies

7 Declan McCullagh, “’Dark’ motive: FBI seeks signs of carrier roadblocks to surveillance”, CNET News, Nov. 5, 2012, available at http://news.cnet.com/8301-13578_3-57545353-38/dark-motive-fbi-seeks-signs-of-carrier-roadblocks-to-surveillance/

8 Vassilis Prevelakis and Diomidis Spinellis, The Athens Affair, IEEE Spectrum, July 2007.

9 Steven M. Bellovin, Matt Blaze, Sandy Clark, and Susan Landau, “Going Bright: Wiretapping without Weakening Communications Infrastructure”, IEEE Security & Privacy, Jan/Feb 201

10 In particular, see the conditions that must be satisfied in 18 USC 2518(1)(c) and the enumeration of offenses in 18 USC 2516.

Ariel Porat and Lior Jacob Strahilevitz, Personalizing Default Rules and Disclosure with Big Data

Comment by: Lauren Willis

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2217064

Workshop draft abstract:

The rise of Big Data has become one of the most important recent economic developments in the industrialized world, while simultaneously posing vexing privacy issues for policymakers. While the use of Big Data to help firms sort consumers (e.g., for marketing or risk-pricing purposes) has been the subject of sustained discussion, scholars have paid little attention to governmental uses of Big Data for purposes unrelated to law enforcement and national security. In this paper, we show how Big Data might transform adjudication, disclosure, and the tailoring of contractual and testamentary default provisions.

The paper makes three key contributions. First, it uses the psychology and computer science literatures to show how Big Data tools classify individuals based on “Big Five” personality metrics, which in turn are predictive of individuals’ future consumption choices, risk preferences, technology adoption, health attributes, and voting behavior. For example, the paper discusses new research showing that by systematically analyzing data from individuals’ smartphones, researchers can identify particular phone users as extroverted, conscientious, open to new experiences, or neurotic. Smartphones, not eyes, turn out to be the true windows into our souls. Second, it shows how Big Data tools can match individuals who are extremely similar in terms of their behavioral profiles and, by providing a subset of “guinea pigs” with a great deal of time and information to make choices, extrapolate ideal default rules for huge swaths of the population. Big Data might even be used to promote privacy to some degree, by offering pro-privacy defaults to consumers whose past behavior suggests a strong preference for privacy and pro-sharing defaults to consumers whose past behavior indicates little interest in safeguarding personal information. Third, the paper is the first to show that the influential critiques of disclosure regimes (including FIPs-style “notice and choice” provisions) are at bottom critiques of impersonal disclosure. A legal regime that relies on Big Data to determine which relevant information about risks, side-effects, and other potentially problematic product attributes should be disclosed to individual consumers has the potential to improve the efficacy of disclosure strategies more generally.
The paper also confronts a number of important objections to personalization in these various contexts, including concerns about cross-subsidies, government discrimination, and personalization in a world where human preferences and behaviors change over time.

The paper seeks to show privacy scholars and advocates precisely what the stakes will be in the looming fight over Big Data. We seek to demonstrate that Big Data battles are not merely fights about privacy versus profits, but implicate a host of unrecognized social interests as well.

Bryan Choi, The Tax Loophole to Constitutional Privacy

Comment by: Derek Bambauer

PLSC 2013

Workshop draft abstract:

Even as the third party doctrine has come under sharp criticism in Fourth Amendment jurisprudence, an eerily similar workaround has been developing under the Fifth Amendment. The third party doctrine grew out of a tax enforcement case that held that a taxpayer has no reasonable expectation of privacy in financial records held by a third party such as a bank. That rule was later generalized to phone records and any other information held by a third party.

Likewise, a recent set of tax enforcement cases in the courts of appeals (5th, 7th, 9th) has held that taxpayers are not entitled to invoke the Fifth Amendment privilege against self-incrimination in order to withhold statutorily required records of offshore bank accounts. In essence, the reasoning adopted by those courts is that, if the records are required to be kept by the defendant, then the government already knows they exist and the compelled disclosure of those records is not incriminating — unless their very existence would indicate criminal activity. The fact that the contents of those records might be incriminating is irrelevant.

This case study provides an opportunity to reevaluate the controversial “required records” doctrine, as well as to revisit the long-running scholarly debate regarding the overlapping roles of the Fourth and Fifth Amendments in safeguarding individual privacy from governmental intrusion. In isolation, the tax enforcement cases seem innocuous enough. Yet, in future cases, the required records doctrine could easily be extended to phone records and other information of governmental interest, in the same manner as the third party doctrine. If we think the third party doctrine has gone too far, we should be wary of retracing its steps under a different guise.

Lilian Edwards and Edina Harbinja, Post mortem privacy: a comparative perspective

Comment by: Deven Desai

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2267388

Workshop draft abstract:

Privacy is the hot topic of the moment in both the EU and US, yet debates around legal intervention to protect privacy have so far mainly been restricted to protecting the living data subject. Common law jurisdictions such as the US and England and Wales have a long tradition of restricting rights seen as personal to the deceased to enforcement by them during their lifetime, hence, e.g., the restrictions on representatives bringing libel, personality and emotional distress claims after death. Civilian legal systems, on the other hand, are traditionally much more respectful of the interests of the deceased, especially the creator, in preserving their reputation and the integrity of their creations after death. EU data protection laws are generally interpreted not to protect the personal data of the dead, though a few countries such as Estonia and Bulgaria have ventured into this territory, and the proposed reforms of the draft DP Regulation make it more, not less, likely that protection will be restricted to the living.

In the meantime, however, a parallel slow-burning debate has ignited concerning the preservation and transmission of emails, social media profiles and other so-called digital assets after death. These issues are currently ruled largely by the law of contract as dictated by standard form service provider terms of service, but judicial and legislative interest is emerging on both sides of the Atlantic. As paradigm cases such as In re Ellsworth and In re Facebook show, disputes over digital assets on death tend to display a patent conflict between the wishes – and sometimes the economic interests – of the living and the privacy rights of the dead. This paper argues that the debate over the commodification and rights of ownership and control over digital assets on death has the potential to illuminate and ignite a debate over whether privacy interests do, and indeed should, end or wither on death. The debate will be illustrated with comparisons from the law of personality rights, moral rights, defamation and organ donation.


No. 2005-296, 651-DE (Mich. Prob. Ct. 2005). See discussion in Baldas T. “Slain Soldier’s E-Mail Spurs Legal Debate: Ownership of Deceased’s Messages at Crux of Issue”, 27 Nat’l L.J. 10, 10 (2005)

In re Request for order requiring Facebook, inc. to produce documents and things, Case No: C 12-80171 LHK (PSG), 9/20/201.

Babak Siavoshy, Fourth Amendment Regulation of Information Processing

Comment by: Stephen Henderson

PLSC 2013

Workshop draft abstract:

When (if ever) should the processing, analysis, or manipulation of evidence—rather than its mere collection—by the government trigger the Fourth Amendment?  This essay addresses some of the difficult line-drawing problems that arise from this question.

The Fourth Amendment protects the people from unreasonable government searches and seizures of persons, houses, papers, and effects.  Increasingly, however, government entities gather information not by rummaging through peoples’ things, but rather by using technology to process, analyze, or manipulate evidence or information that is already in the government’s hands or otherwise exposed.  For instance, the government may uncover information about a person by analyzing DNA he “abandoned” on the sidewalk or a discarded coffee cup; it might learn what happens in his house by processing the heat signatures emanating from its walls; or it might learn his habits by stringing together the pattern of his “public” movement using thousands of data points from cameras, government weather satellites, or automatic license plate readers. In each of these cases, the physical form of what is collected—DNA, heat, or visual information exposed to the public—is either exposed or already in the government’s hands.  It is the government’s use of technology to process, analyze, and enhance what is collected that makes the evidence useful, and that raises potential privacy concerns.

One response to these developments—perhaps representing the conventional wisdom—is that there are few, if any, constitutional limits on the government’s ability to manipulate evidence it could otherwise legally obtain.  Advocates of this position correctly note that judicially imposed limitations on information processing create difficult line drawing problems (how do we distinguish between acceptable information processing and unacceptable information processing?) and risk tying the hands of law enforcement by arbitrarily restricting the use of technology in investigations.  Accordingly, the conventional wisdom makes a strong argument that the government’s use of technology to manipulate, process, or analyze evidence—where there is no obvious collection problem—does not and should not trigger the Fourth Amendment.

This essay argues that the conventional wisdom on information processing under the Fourth Amendment is both misplaced and overstated. It is misplaced because it adopts a wooden construction and application of the Fourth Amendment (an otherwise flexible provision), one that risks significantly undermining the Amendment’s effectiveness and purpose, particularly in light of advancements in technology that allow the government to get the information it wants without engaging in conduct that looks like a Fourth Amendment search or seizure. The conventional wisdom on information processing is also overstated, because it assumes that courts have heretofore been unwilling to impose constitutional limitations on information processing conduct by the government. In fact, the issue is not new to the courts. The judges and justices who shape Fourth Amendment law have grappled with what is essentially technologically-enhanced information processing conduct in cases as varied as Kyllo v. United States, Skinner v. Railway Labor Executives’ Association, Walter v. United States, United States v. Jones, and even Katz v. United States. An overview of these and other cases suggests, first, that courts are willing to impose Fourth Amendment limitations on some information-processing conduct—or at the very least, that courts acknowledge that such conduct raises a Fourth Amendment question. Second, it suggests a number of different solutions to the legitimate line-drawing and other concerns raised by advocates of the view that information processing should not, by itself, trigger the Fourth Amendment. While there are no perfect solutions, the essay suggests a theoretical framework and a path forward for evaluating the Fourth Amendment implications of the increasing use of technologically-enhanced information processing by the government.

Andrea M. Matwyshyn, Talking Data

Comment by: Andrew Selbst

PLSC 2013

Workshop draft abstract:

In the wake of Sorrell v. IMS Health, open questions remain regarding the limitations on privacy regulation imposed by the First Amendment. A conceptual classification problem that is simultaneously visible in other bodies of law has crept into the intersection of privacy and the First Amendment: confusion over when (or whether) a data aggregator’s code (and its attached information) is a type of expressive, socially-embedded act of communication or a type of free-standing communicative yet regulable “product.” This article argues that although the statute at issue in Sorrell failed First Amendment scrutiny, privacy regulation which restricts onward transfer of databases of consumer information – even transfers of anonymized data – can, if carefully crafted, pass First Amendment scrutiny. By blending doctrinal First Amendment principles with the fundamental tenets of human subjects research protection imposed by Congress and the Department of Health and Human Services, this article explains the doctrinal limits of the First Amendment on future consumer privacy laws and offers an example of a possible First Amendment-sensitive approach to protecting consumer privacy in commercial databases.

David Gray & Danielle Citron, A Technology-Centered Approach to Quantitative Privacy

Comment by: Harry Surden

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2129439

Abstract:

We are at the cusp of a historic shift in our conceptions of the Fourth Amendment driven by dramatic advances in technologies that continuously track and aggregate information about our daily activities.  The Fourth Amendment tipping point was marked this term by United States v. Jones.  There, law enforcement officers used a GPS device attached to Jones’s car to follow his movements for four weeks.  Although Jones was resolved on narrow grounds, five justices signed concurring opinions defending a revolutionary proposition: that citizens have Fourth Amendment interests in substantial quantities of information about their public or shared activities, even if they lack a reasonable expectation of privacy in each of the constitutive particulars. This quantitative approach to the Fourth Amendment has since been the focus of considerable debate.  Among the most compelling challenges are identifying its Fourth Amendment pedigree, describing a workable test for deciding how much information is enough to trigger Fourth Amendment interests, and explaining the doctrinal consequences.  This Article takes up these challenges.

Our analysis and proposal draw upon insights from information privacy law.  Although information privacy law and Fourth Amendment jurisprudence share a fundamental interest in protecting privacy interests, these conversations have been treated as theoretically and practically discrete.  This Article ends that isolation and the mutual exceptionalism that it implies.  As information privacy scholarship suggests, technology can permit government to know us in unprecedented and totalizing ways at great cost to personal development and democratic institutions.  We argue that these concerns about panoptic surveillance lie at the heart of the Fourth Amendment as well.  We therefore propose a technology-centered approach to measuring and protecting Fourth Amendment interests in quantitative privacy.  As opposed to proposals for case-by-case assessments of information “mosaics,” which have so far dominated the debate, we argue that government access to technologies capable of facilitating broad programs of continuous and indiscriminate monitoring should be subject to the same Fourth Amendment limitations applied to physical searches.