Archives

Susan Freiwald, Fourth Amendment Protection for Stored Cell Site Location Information

Comment by: Katherine Strandburg

PLSC 2010

Workshop draft abstract:

Lower courts have split on whether agents need to obtain a warrant prior to obtaining real-time or prospective information from cell phone service providers about the cell phone towers used by a targeted subscriber. Such Cell Site Location Information ("CSLI") may divulge detailed information about a person's whereabouts and travels throughout the day, because cell phones may register frequently with nearby cell towers to direct incoming and outgoing calls, text and data. While courts have analogized between real-time access to CSLI and electronic surveillance, only recently did a Magistrate Judge in Pennsylvania recognize that access to historical CSLI poses the same risk of abuse as real-time access and requires the same meaningful judicial oversight to satisfy the Fourth Amendment. (534 F. Supp. 2d 585) She denied, in an opinion joined by three other magistrate judges, the government's request for historical CSLI without a warrant based on probable cause, and her order was upheld, without opinion, by the District Court. (2008 WL 4191511)

My paper would elaborate on the arguments that I made as an amicus curiae in two briefs: one in the District Court of Pennsylvania in favor of affirming the Magistrate Judge's decision and one in the Third Circuit in favor of affirming the District Court's order. Briefly, the distinction between historical data and real-time or prospective data is practically arbitrary, because agents may regularly request records of immediately past use and thereby use "historical" orders effectively to obtain real-time information. As a substantive matter, methods to obtain historical CSLI may be just as hidden, indiscriminate, and effectively continuous (in that they cover a period of time) as the methods used to wiretap. CSLI should be subject to a reasonable expectation of privacy (your recent great work supports this) and its acquisition is quite intrusive. (I have argued elsewhere that the Supreme Court and lower courts have found that the Fourth Amendment requires the highest level of judicial oversight when the government uses a surveillance method that is hidden, continuous, indiscriminate, and intrusive.)

Doctrinally, the beeper cases do not shed much light on the question, but to the extent they do, they support requiring at least a warrant. The same may be said for the Miller case, which I argue does not support the broad "third-party rule" that is claimed for it and does not support access to historical CSLI on less than a warrant either. Depending on what happens in the Third Circuit and when, the paper can either discuss the oral argument or the actual decision and assess it against my own views of what the law is and should be.

Mark MacCarthy, New Directions in Privacy: Disclosure, Unfairness, and Externalities

Comment by: Lauren Willis

PLSC 2010

Workshop draft abstract:

Several recent developments underscore a return of public concerns about access to personal information by businesses and its possible misuse. The Administration is conducting an extensive interagency review of commercial privacy, Congress is considering legislation on online behavioral advertising, and in November an international conference of government officials will likely approve a global standard on privacy protection.

But what’s the best way to protect privacy? David Vladeck, the new head of consumer protection for the Federal Trade Commission, has said he is dissatisfied with the existing policy frameworks for thinking about the issue. He’s right. The traditional framework of fair information practices is severely limited by excessive reliance on informed consent.  Restrictions on disclosure are impractical in a digital world where information collection is ubiquitous, where apparently anonymous or de-identified information can be associated with a specific person and where one person’s decision to share information can adversely affect others who choose to remain silent.  The alternative “harm” framework, however, seems to allow all sorts of privacy violations except when specific, tangible harm results.  If an online marketer secretly tracks you on the Internet and serves you ads based on which web sites you visited, well, where’s the harm?  How are you hurt by getting targeted ads instead of generic ones?  And yet people feel that secret tracking is the essence of a privacy violation.

The traditional harms approach is clearly too limited.  It defines the notion of harm so narrowly that privacy itself is no longer at stake.  And yet its focus on outcomes and substantive protection rather than process is a step in the right direction.

Part I of this paper describes the limitations of the informed consent model, suggesting that informed consent is neither necessary nor sufficient for a legitimate information practice. Part II explores the idea of negative privacy externalities, illustrating several ways in which data can be leaky. It also discusses the ways in which the indirect disclosure of information can harm individuals through invidious discrimination, inefficient product variety, restrictions on access, and price discrimination. Part III outlines the unfairness model, explores the three-part test for unfairness under the Federal Trade Commission Act, and compares the model to similar privacy frameworks that have been proposed as additions to (or replacements for) the informed consent model. Part IV explores how to apply the unfairness framework to some current privacy issues involving online behavioral advertising and social networks.

Matthew Bodie, Employee Privacy and Autonomy, in Restatement Third of Employment Law

Comment by: Tanya Forsheit

PLSC 2010

Workshop draft abstract:

The chapter, part of the American Law Institute’s first Restatement of employment law, endeavors to restate the common law of worker privacy and autonomy.  It describes employees’ privacy interests in information, physical and electronic locations, and confidentiality, as well as protection against retaliation for refusals to accede to privacy violations.  The autonomy provisions protect employees against adverse actions based on important off-duty activities that have no effect on workplace responsibilities.  The draft is at this point the work of the reporter; it has not been approved by the ALI Council or membership.  I am very much looking forward to your comments.

Aleecia McDonald & Lorrie Cranor, An Empirical Study of How People Perceive Online Behavioral Advertising

Aleecia McDonald & Lorrie Cranor, An Empirical Study of How People Perceive Online Behavioral Advertising

Comment by: Joseph Turow

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1989092

Workshop draft abstract:

We performed a series of in-depth qualitative interviews with 14 subjects who answered advertisements to participate in a university study about Internet advertising. Subjects were not informed that the study had to do with behavioral advertising privacy, yet they raised privacy concerns on their own, unprompted. We asked, "what are the best and worst things about Internet advertising?" and "what do you think about Internet advertising?" Participants held a wide range of views, from enthusiasm about ads that inform them of new products and discounts they would not otherwise know about, to resignation that ads are "a fact of life," to resentment of ads that they find "insulting." Many participants raised privacy issues in the first few minutes of discussion without any prompting about privacy. We discovered that many participants have a poor understanding of how Internet advertising works, do not understand the use of first-party cookies, let alone third-party cookies, do not realize that behavioral advertising already takes place, believe that their actions online are completely anonymous unless they are logged into a website, and believe that there are legal protections that prohibit companies from sharing information they collect online. We found that participants have substantial confusion about the results of the actions they take within their browsers, do not understand the technology they work with now, and clear cookies as much out of a notion of hygiene as for privacy.

When we asked participants to read the NAI opt-out cookie description, only one understood the text. One participant expressed concern that the NAI opt-out program was actually a scam to gather additional personal information. No participants had heard of opt-out cookies or flash cookies. We also found divergent views on what constitutes advertising. Industry self-regulation guidelines assume consumers can distinguish third-party widgets from first-party content, and further assume that consumers understand data flows to third-party advertisers. Instead, we find some people are not even aware of when they are being advertised to, let alone aware of what data is collected or how it is used.

Deirdre Mulligan & Colin Koopman, A Multi-Dimensional Analysis of Theories of Privacy

Deirdre Mulligan & Colin Koopman, A Multi-Dimensional Analysis of Theories of Privacy

Comment by: Harry Surden

PLSC 2010

Workshop draft abstract:

The concept of privacy, despite its centrality for contemporary liberal democratic culture, is remarkably ill-understood.  We face today an almost dizzying array of diverging and conflicting theorizations, conceptualizations, diagnoses, and analyses of privacy.

These multiple senses of privacy provoke uncertainty about the concept and attendant charges of ambiguity and vagueness. Although this uncertainty is a cause for concern, we argue here that the conceptual plurality of privacy with which we are faced today positively answers to the dynamic and diverse functions that privacy performs in our culture. In order to appreciate the positive benefits of privacy's plurality, however, we need to undertake inquiries into the various ways in which our conceptions of privacy differ from one another. Our primary claim is that the multiple dimensions along which concepts of privacy vary demand careful scrutiny and evaluation.

Short of that, we may too easily find ourselves overwhelmed with an abundance of claims concerning privacy, and this abundance may induce a dizzying rather than a dynamic uncertainty.  The article proceeds as follows.  Section 1 presents an introduction to the plurality of privacy.  Section 2 argues on behalf of a multi-dimensional taxonomy for privacy theories that would enable us to work with privacy concepts in a more nuanced manner than is typical.  Section 3 presents a categorization of extant theories of privacy according to this taxonomy and Section 4 explicates these theories.  Section 5 offers a brief conclusion about the potential upside of our multi-dimensional approach.

Corey Ciocchetti, Employee Monitoring: Emerging Technologies & Contemporary Issues

Comment by: Eileen Ridley

PLSC 2010

Workshop draft abstract:

This project focuses on employment law, modern technology, and the monitoring of employees in the private workplace. Today, the vast majority of employers monitor some form of their employees' e-mail, phone calls, voicemails, instant messages, Internet surfing, workplace activities, or offsite, non-work-related activities. United States law allows, and often approves of, such monitoring. Employers argue that such surveillance is necessary to: (1) avoid legal liability based on inappropriate activities (e.g., Internet gambling) at work and (2) increase employee effectiveness. Employees and privacy advocates, on the other hand, argue that monitoring has become excessively intrusive, invades individual privacy, and thereby decreases workplace morale. Tensions rise when excessive monitoring creates a work environment where employees sneak in "unapproved" activities during work hours. Unfortunately, the United States legal system is poorly equipped to deal with the advances in technology driving sophisticated employee monitoring. This article details the advanced technology being implemented and proposes a modified legal structure whereby the law limits the amount and type of monitoring granted to employers. The argument is made for a balanced approach that does not excessively hinder workplace efficiency but allows employees to feel more comfortable in the workplace.

Ariana R. Levinson, Reconsidering the Electronic Communications Privacy Act as a Source of Employee Privacy from Electronic Monitoring

Comment by: Eileen Ridley

PLSC 2010

Workshop draft abstract:

Scholars generally recognize that new technology has outpaced the law's ability to protect employees' privacy from electronic monitoring by employers. And numerous scholars have commented on the inadequacy of the Electronic Communications Privacy Act (ECPA), the federal law designed to ensure the privacy of electronic communications, as interpreted by the courts, to protect employees' privacy from newer forms of electronic monitoring, such as monitoring of e-mail. Yet, despite increasing calls from a broad range of entities for stronger privacy protections, passage of new legislation designed to adequately protect employees is, at best, not close at hand and, at worst, unlikely.

This article, thus, takes a second look at the ECPA as a potential source of some comprehensive protection for employees from employer electronic monitoring. Several new decisions out of the Ninth Circuit, such as an appellate decision denying summary judgment to a company that contracted with an employer who read an employee's text messages, and a district court decision denying a motion to dismiss for monitoring of keystrokes to discover an employee's personal e-mail password, suggest that new employment trends, such as the use of third-party networks rather than employer networks, could lead to greater protection under the Act. They also suggest that increasing incidents of employers intentionally accessing clearly personal communications, sometimes communications that would otherwise be attorney-client privileged, could lead to greater protection. Perhaps the reverse of the saying that bad facts make bad law is also true.

Additionally, some of the bases upon which the ECPA has been held not to protect employee communications, such as the definition of interception and the exceptions for consent and for ordinary course of business, have likely been interpreted differently by different courts. Some interpretations are more protective of employee privacy and more consistent with the intent of Congress to protect individuals' privacy from electronic monitoring.

Francesca Bignami, The Non-Americanization of European Regulatory Styles: Data Privacy Regulation in France, Germany, Italy, and Britain

Comment by: Herbert Burkert

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1813966

Workshop draft abstract:

European countries have experienced massive structural transformation over the past twenty-five years with the privatization of state-owned industries, the liberalization of markets, and the rise of the European Union.  Yet there is little consensus on how these structural changes have impacted traditional European regulatory styles, generally thought to be informal and flexible compared to the litigation-driven and legalistic American regulatory style.  This article argues that European countries are converging on a model of administration that relies on legalistic regulatory enforcement and that gives market actors extensive opportunities for self-regulation but that otherwise leaves intact earlier regulatory styles.  In particular, contrary to claims of Americanization, litigation remains a relatively insignificant component of the regulatory process.  The explanation for the emerging regulatory model—called “cooperative legalism”— is to be found in the dynamics of Europeanization, which has both facilitated the diffusion of self-regulation from northern to southern countries and put pressure on national governments to demonstrate their commitment to EU policies through legalistic enforcement.  The evidence for this theory is drawn from a structured comparison of data privacy regulation in four countries (France, Britain, Germany, and Italy) and a review of three other policy areas.

Ryan M. Calo, A Hybrid Conception of Privacy Harm

Comment by: Siva Vaidhyanathan

PLSC 2010

Workshop draft abstract:

What counts as a "privacy harm," exactly? Today's thought leaders offer two popular but widely disparate accounts. On one view—espoused by Richard Parker, Richard Posner, and many others—a privacy harm must involve the literal unwanted sensing of visual or other information by a human being. Although attractive in some respects, this account could exclude everything from the notorious Panopticon (which works precisely because mere uncertainty of observation modifies behavior) to the bulk of contemporary data-mining.

Another leading view, however, may go too far: Dan Solove's influential "taxonomy of privacy" admits of sixteen loosely related subcategories of privacy-implicating conduct. These are selected on the basis of what practices any of the right sorts of authorities—"laws, cases, constitutions, guidelines, and other sources"—have chosen over the years to associate with the term "privacy." Solove's framework includes Jeremy Bentham's design, but arguably covers a broad range of activities better described in terms of coercion or nuisance.

This essay proposes a third way to think about privacy harm that incorporates the most promising elements of these two influential accounts. Specifically, the essay argues that privacy harm involves either (1) the unwanted perception of observation or (2) the use of information without the subject's consent to justify an adverse action. This approach captures the intuition that privacy is basically about observation, but also embraces the many situations in which no actual observation by a person need occur in order to cause a privacy harm. The essay then walks through a series of thought experiments to defend its approach from anticipated critiques.

Jane Winn, Technical Standards as Information Privacy Regulation

Comment by: Ed Felten

PLSC 2010

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1118542

Workshop draft abstract:

Most information privacy laws are based on 20th-century administrative law models, taking human conduct rather than information architecture as the subject of regulation. Such regulations are clearly inadequate to control how computer systems process information, and that inadequacy will become more acute as pervasive computing grows. Technical standards may serve as a form of administrative law capable of directly targeting the information architecture as the subject of regulation. A technical standard is defined by ISO as a "document, established by consensus and approved by a recognized body, that provides for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context." The authority of technical standards as regulation has been both obscured and legitimated by the role of science and technocratic professionalism in standard-setting processes. More explicit systems for coordinating the work of conventional legal institutions and technical standard-setting processes are needed to increase the effectiveness of information privacy laws. As part of a more general movement away from state regulation and toward enforced self-regulation by the private sector, such explicit systems have already been developed in areas such as product and food safety, and are emerging in information technology arenas. The Payment Card Industry Data Security Standard is part of a private self-regulatory system based on both legal rules and technical standards. Standardization of privacy impact assessments represents progress toward incorporation of technical standards into the framework of information privacy laws.