Archives

Yang Wang, Pedro Giovanni Leon, Kevin Scott, Xiaoxuan Chen, Alessandro Acquisti, and Lorrie Faith Cranor, Privacy Soft-paternalism: Facebook Users’ Reactions to Privacy Nudges

Comment by: Andrew Clearwater

PLSC 2013

Workshop draft abstract:

Anecdotal evidence and scholarly research have shown that a significant portion of Internet users experience regrets over disclosures they have made online. To explore ways to help individuals avoid or lessen regrets associated with online mistakes, we employed lessons from behavioral decision research and soft-paternalism to develop three Facebook interfaces that “nudge” users to consider the content and context of their online disclosures more carefully before posting. We implemented three nudging interfaces: profile picture, timer, and timer plus sentiment meter.

The picture nudge was designed to remind Facebook users of which individuals are in the audience for their posts. Depending on the particular post privacy setting, users were shown five profile pictures randomly selected from the pool of those who could see their posts. These profile pictures appeared under the status-update and comment text boxes when users started typing.

The timer nudge was designed to encourage users to stop and think. The warning message “You will have 10 seconds to cancel after you post the update” with a yellow background was displayed under the status-update and comment text boxes when users started typing. After clicking the “Post” button, users were given the options to “Cancel” or “Edit” their post before it was automatically published after 10 seconds.

The third nudge added a sentiment meter to the timer nudge, and the content of each post was analyzed by our sentiment algorithm. This nudge was designed to make users more aware of how others might perceive their posts. For posts with a positive or negative score, the warning message “Other people may perceive your post as {Very Positive, Positive, Negative, Very Negative}” was displayed during the countdown.
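The nudge logic described above is simple enough to sketch in code. The following is a minimal, hypothetical Python sketch, not the authors’ implementation: the function names, the sentiment-score cutoffs, and the blocking countdown (standing in for a cancellable UI timer with “Cancel” and “Edit” buttons) are all illustrative assumptions.

```python
import random
import time

def picture_nudge(audience, k=5):
    """Sample up to k profile pictures from the pool of users who can see the post."""
    return random.sample(audience, min(k, len(audience)))

def sentiment_label(score):
    """Map a sentiment score in [-1, 1] to the warning categories quoted above.
    The 0.5 cutoffs are illustrative assumptions; the paper's algorithm is unspecified."""
    if score >= 0.5:
        return "Very Positive"
    if score > 0:
        return "Positive"
    if score <= -0.5:
        return "Very Negative"
    if score < 0:
        return "Negative"
    return None  # neutral posts trigger no sentiment warning

def post_with_nudges(text, audience, score=None, delay=10):
    """Show the three nudges, then hold the post for `delay` seconds before
    publishing. A real interface would run a cancellable countdown with
    Cancel/Edit buttons instead of blocking."""
    print("Sample of people who can see this post:", picture_nudge(audience))
    label = sentiment_label(score) if score is not None else None
    if label:
        print(f"Other people may perceive your post as {label}")
    print(f"You will have {delay} seconds to cancel after you post the update")
    time.sleep(delay)
    return {"published": True, "text": text}

if __name__ == "__main__":
    friends = ["alice", "bob", "carol", "dan", "erin", "frank"]
    post_with_nudges("Had a rough day...", friends, score=-0.7, delay=1)
```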

We tested these nudges in a 3-week field trial with 21 Facebook users, and conducted 13 follow-up interviews. By triangulating system logs of participants’ behavioral data with results from the exit survey and interviews, we found evidence that the nudges had positive influences on some users’ posting behavior, mitigating unintended disclosures. We also found limitations of the current nudge designs and identified future directions for improvement. Our results suggest that a soft-paternalistic approach to protecting people’s privacy on social network sites could be beneficial.

Margot E. Kaminski and Shane Witnov, The VPPA is Dead, Long Live the VPPA: On Legislative Proposals for Protecting Reader and Viewer Privacy

Comment by: Christopher Wolf

PLSC 2013

Workshop draft abstract:

This paper will add to the existing literature on “intellectual privacy.”  It will examine the current state of legislated privacy for readers or viewers of intellectual goods, in the wake of the recent amendments to the federal Video Privacy Protection Act (VPPA).  This renewed analysis is necessary in light of recent political, judicial, technological, and social developments. This paper additionally aims to introduce more history and social science research into the legal literature on intellectual privacy.

Part I: Contribution to discussion of why Intellectual Privacy Matters

The first part of this paper will contribute to the ongoing scholarship on why intellectual privacy is important. It will do so from three perspectives: legal; historical; and sociological and psychological.

Several scholars have examined First Amendment jurisprudence for evidence of protection for readers and their freedom to inquire and develop ideas.  This paper will revisit this jurisprudence with a narrower focus, looking for when courts have supported intellectual privacy in ways more directly relevant to a right to read unobserved.  This section will also discuss at greater length the divide in the First Amendment between protection for speakers and protection for readers and listeners, comparing it to the clear protection for readers that exists in international law.

This paper will additionally add to the intellectual privacy literature with an analysis of related recent and forthcoming jurisprudence. In particular, the Supreme Court recently decided U.S. v. Jones, a case on GPS location tracking, in which several Justices expressed concerns about dragnet surveillance and its implications for associational privacy. The decision in Clapper v. Amnesty International USA will likely come down later in 2013 and will have consequences for the Court’s understanding of privacy harms and standing in both the First and Fourth Amendment contexts.

This paper also aims to introduce more in-depth historical and social science examples into the legal literature. It will discuss historical examples of the effects of totalitarian regimes on creative freedom and speech, and it will examine the social science literature on surveillance and chilling effects, as well as on the conditions necessary for the creative process.

Part II: Recent technological and social developments

In recent years, a number of significant technological and social developments have taken place related to intellectual privacy. As both Richards and Kaminski noted, “frictionless reading” arose on Facebook in 2011-2012. However, by the end of 2012, the Guardian had closed its social reader app, and use of the Washington Post’s Social Reader had declined by 95 percent. Additionally in 2012, the novel Fifty Shades of Grey became a best-seller. The book’s success was credited to the rise of e-book readers, the feeling of privacy readers had because others could not see the book’s cover in public, and the tame covers ultimately chosen by the publisher for the hard copy of the book. The visible failure of frictionless reading and the success of Fifty Shades of Grey suggest that current social norms support legislation protecting the privacy of reading materials. However, librarians have increasingly been moving away from privacy protection as they move into supplying e-books, suggesting that private ordering by social institutions may no longer be a dependable solution.

A recent technological development has serious implications for intellectual privacy and has not been adequately discussed: digital technology now supports surveillance of readers (including students) at an increasingly granular level. The Kindle can track highlighted passages and the pace at which one reads. E-textbooks can report back on how much students are reading or paying attention. Eye trackers can pinpoint exactly where viewers direct their attention. This phenomenon of increased granularity and its significance for theorizing intellectual privacy has not been remarked on at any length in the existing literature. This paper will discuss whether and when there is a distinction between purchasing information goods and using such goods to form ideas, and whether additional protection should be afforded to the details of reading behavior. The paper will thereby attempt to incorporate into intellectual privacy the growing literature on mind reading and self-incrimination. It will also address the tension between monetization through monitoring and respect for readers’ privacy.

Part III: Legislation

As Neil M. Richards noted in his recent article on social reading, there were two significant political developments concerning reader/viewer privacy in 2012. The California Reader Privacy Law was passed, but industry also pushed heavily for amendments to the VPPA. Since Richards’ article, Congress passed the VPPA Amendments Act of 2012 on December 20, 2012; it currently awaits the President’s signature.

This paper will review and categorize state legislation on library records, bookstores, and video rentals. It will review this legislation with attention to the following: notice requirements, data retention or deletion requirements, consent requirements, protection from consequences such as profiling, limitations on law enforcement behavior, and the creation of private rights of action. The authors believe that a thorough categorization along these axes will give legislators a fuller picture than a focus on notice and consent alone.

The paper will close with a proposal for state and/or federal legislation supporting intellectual privacy.

Christopher Slobogin, Making the Most of United States v. Jones in a Surveillance Society: A Statutory Implementation of Mosaic Theory

Comment by: Susan Freiwald

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2098002

Workshop draft abstract:

In the Supreme Court’s recent decision in United States v. Jones, a majority of the Justices appeared to recognize that under some circumstances aggregation of information about an individual through governmental surveillance can amount to a Fourth Amendment search. If adopted by the Court, this notion—sometimes called “mosaic theory”—could bring about a radical change to Fourth Amendment jurisprudence, not just in connection with surveillance of public movements—the issue raised in Jones—but also with respect to the government’s increasingly pervasive record-mining efforts. One reason the Court might avoid the mosaic theory is the perceived difficulty of implementing it. This article provides, in the guise of a model statute, a means of doing so. More specifically, this article explains how proportionality reasoning and political process theory can provide concrete guidance for the courts and police in connection with physical and data surveillance.

A. Michael Froomkin, Privacy Impact Notices

Comment by: Stuart Shapiro

PLSC 2013

Workshop draft abstract:

The systematic collection of personal data is a big and urgent problem, and the pace of that collection is accelerating as the cost of collection plummets. Worse, the continued development of data processing technology means that this data can be used and cross-indexed increasingly effectively and cheaply. Add in the fact that there is more and more historical data — and self-reported data — to which the sensor data can be linked, and we will soon find ourselves in the equivalent of a digital goldfish bowl.

It is time – or even past time – to do something.  In this paper I suggest we borrow home-grown solutions from US environmental law.   By combining the best features of a number of existing environmental laws and regulations, and — not least — by learning from some of their mistakes, we can craft rules about data collection that would go some significant distance towards stemming the tide of privacy-destroying technologies being, and about to be, deployed.

I propose that we should require Privacy Impact Notices (PINs) before allowing large public or private projects which risk having a substantial impact on personal information privacy or on privacy in public. [“Privacy Impact Statements” would make for better parallelism with Environmental Impact Statements but the plural form of the acronym would be unfortunate.] The PINs requirement would be modeled on existing environmental laws, notably the National Environmental Policy Act of 1969 (NEPA), the law that called into being  the Environmental Impact Statement (EIS).  A PINs rule would be combined with other reporting requirements modeled on the Toxics Release Inventory (TRI). It would also take advantage of progress in ecosystem modeling, particularly the insight that complex systems like ecologies, whether of living things or the data about them, are dynamic systems that must be re-sampled over time in order to understand how they are changing and whether mitigation measures or legal protections are working.

The overarching goals of this regulatory scheme are familiar ones from environmental law and policy-making: to inform the public of decisions being considered (or made) that affect it, to solicit public feedback as plans are designed, and to encourage decision-makers to consider privacy — and public opinion — from an early stage in their design and approval processes. That was NEPA’s goal, however imperfectly achieved. In addition, because the relevant technologies change quickly, and because the accumulation of personal information can have unexpected synergistic effects as we learn new ways of linking previously disparate data sets, the environmental law and policy experience teaches that it is also important to invest effort in ongoing, or at least annual, reporting requirements that allow periodic re-appraisal of the legitimacy and net social utility of the regulated activity (here, data collection programs).

There is an important threshold issue. Privacy regulation today differs from contemporary environmental regulation in one particularly important way: there are relatively few laws and rules on the books protecting data privacy (or privacy in public). Thus, privacy law today more resembles anti-pollution law before the Clean Air Act or the Clean Water Act. NEPA’s rules are triggered by state action: a government project, or a request to issue a permit. In order to give the PINs system traction outside of direct governmental data collection, additional regulation reaching private conduct will be required. That could be direct regulation of large private-sector data gathering or, as a first step, something less effective but easier to legislate, such as a rule reaching all government contractors and suppliers. Legislation could be federal, but it might also be effective at the state level.

The proposals in this paper intersect with active and on-going debates over the value of notice policies.  They build on, but in at least one critical way diverge from, the work of Dennis D. Hirsch, who in 2006 had the important insight — even truer today — that many privacy problems resemble pollution problems and that therefore privacy-protective regulation could profitably be based on the latest learning from environmental law.

Deven Desai, Data Hoarding: Privacy in the Age of Artificial Intelligence

Comment by: Kirsten Martin

PLSC 2013

Workshop draft abstract:

We live in an age of data hoarding. Those who have data never wish to release it. Those who don’t have data want to grab it and increase their stores. In both cases—refusing to release data and gathering data—the mosaic theory, which accepts that “seemingly insignificant information may become significant when combined with other information,”1 seems to explain the result. Discussions of mosaic theory focus on executive power. In national security cases the government refuses to share data lest it reveal secrets. Yet recent Fourth Amendment cases limit the state’s ability to gather location data, because under the mosaic theory the aggregated data could reveal more than what isolated surveillance would reveal.2 The theory describes a problem but yields wildly different results. Worse, it does not explain what to do about data collection, retention, and release in different contexts. Furthermore, if data hoarding is a problem for the state, it is one for the private sector too. Private companies, such as Amazon, Google, Facebook, and Wal-Mart, gather and keep as much data as possible, because they wish to learn more about consumers and how to sell to them. Researchers gather and mine data to open new doors in almost every scientific discipline. Like the government, neither group is likely to share the data they collect or increase transparency, for data is power.

I argue that just as we have started to look at the implications of mosaic theory for the state, we must do so for the private sector. So far, privacy scholarship has separated government and private sector data practices. That division is less tenable today. Not only governments, but also companies and scientists assemble digital dossiers. The digital dossiers described just ten years ago now emerge faster than ever and with deeper information about us. Individualized data sets matter, but they are now part of something bigger. Large, networked data sets—so-called Big Data—and data mining techniques simultaneously allow someone to study large groups, to know what an individual has done in the past, and to predict certain future outcomes.3 In all sectors, the vast wave of automatically gathered data points is no longer a barrier to such analysis. Instead, it fuels and improves the analysis, because new systems learn from data sets. Thanks to artificial intelligence, the fantasy of a few data points connecting to and revealing a larger picture may be a reality.

Put differently, discussions about privacy and technology in all contexts miss a simple, yet fundamental, point: artificial intelligence changes everything about privacy. Given that large data sets are here to stay and artificial intelligence techniques promise to revolutionize what we learn from those data sets, the law must understand the rules for these new avenues of information. To address this challenge, I draw on computer science literature to test claims about the harms or benefits of data collection and use. By showing the parallels between state and private sector claims about data and mapping the boundaries of those claims, this Article offers a way to understand and manage what is at stake in the age of pervasive data hoarding and automatic analysis possible with artificial intelligence.


1 Jameel Jaffer, The Mosaic Theory, 77 SOCIAL RESEARCH 873, 873 (2010).

2 See, e.g., Orin Kerr, The Mosaic Theory of the Fourth Amendment, 110 MICH. L. REV. __ (2012) (forthcoming) (criticizing application of mosaic theory to analysis of when collective surveillance steps constitute a search).

3 See, e.g., Hyunyoung Choi and Hal Varian, Predicting the Present with Google Trends, Google, Inc. (April 2009), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1659302

Daniel J. Solove and Woodrow Hartzog, The FTC and the New Common Law of Privacy

Comment by: Gerald Stegmaier & Chris Jay Hoofnagle

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312913

Workshop draft abstract:

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States is not really law and has barely been studied in a scholarly way.  Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices.  Despite over fifteen years of FTC enforcement, there is no meaningful body of case law to show for it.  The cases have nearly all resulted in settlement agreements.  Nevertheless, companies look to these agreements to guide their decisions regarding privacy practices.  Those involved with helping businesses comply with privacy law – from chief privacy officers to inside counsel to outside counsel – parse and analyze the FTC’s settlement agreements, reports, and activities as if they were pronouncements by the High Court.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent of a body of common law, and we examine it as such.  The FTC has said quite a lot through its actions and settlement agreements, and FTC privacy jurisprudence is the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute or common law tort.  The statutory law regulating privacy is diffuse and discordant, and the common law torts fail to regulate the majority of activities concerning privacy.  Despite the central governing role of the FTC’s privacy activity, it has not received much scholarly attention.

In Part I of this article, we discuss how the FTC’s actions function practically as a body of common law for privacy.  In the late 1990s, it was far from clear that the body of law regulating privacy policies would come from the FTC and not from traditional contract law and promissory estoppel.  Though privacy policies often have all the indicia of enforceable promises, they have rarely been utilized as contracts.  On the few occasions when contract law has been invoked for privacy policies, it has usually failed. We explore how and why the current state of affairs developed.  In Part II, we examine the principles that emerge from this body of law.  These principles extend far beyond merely honoring promises.  We discuss how these principles compare to principles in other legal domains, such as contract law. In Part III, we explore the implications of these developments and the ways that this body of law could develop.

Helen Nissenbaum, Respect for Context as a Benchmark for Privacy Online: What it is and isn’t

Comment by: James Rule

PLSC 2013

Workshop draft abstract:

In February 2012, the Obama White House unveiled a Privacy Bill of Rights within the report, Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy, developed by the Department of Commerce, NTIA. Among the Bill of Rights’ seven principles, the third, “Respect for Context,” was explained as the expectation that “companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” (p.47) Compared with the other six, which were more recognizable as kin of traditional principles of fair information practices, such as, for example, the OECD Privacy Guidelines, the principle of Respect for Context (PRC) was intriguingly novel.

Generally positive reactions to the White House Report and to the principle of respect-for-context aligned many parties who have disagreed with one another on virtually everything else to do with privacy. That the White House publicly and forcefully acknowledged the privacy problem buoyed those who have worked on it for decades; yet, how far the rallying cry around respect-for-context will push genuine progress is critically dependent on how this principle is interpreted. In short, convergent reactions may be too good to be true if they stand upon divergent interpretations, and whether the Privacy Bill of Rights fulfills its promise as a watershed for privacy will depend on which of these interpretations drives regulators to action – public or private. At least, this is the argument my article develops.

Commentaries surrounding the Report reveal five prominent interpretations: a) context as determined by purpose specification; b) context as determined by technology, or platform; c) context as determined by business sector, or industry; d) context as determined by business model; and e) context as determined by social sphere. In the report itself, the meaning seems to shift from section to section or is left indeterminate; without dwelling too long on what exactly the NTIA may or may not have intended, my article discusses these five interpretations, focusing on what is at stake in adopting any one of them. Arguing that a) and c) would sustain existing stalemates and inertia, and that b) and d), though a step forward, would not realize the principle’s compelling promise, I defend e), which conceives of context as social sphere. Drawing on ideas in Privacy in Context: Technology, Policy, and the Integrity of Social Life (2010), I argue (1) that substantive constraints derived from context-specific informational norms are essential for infusing fairness into purely procedural rule sets; and (2) that rule sets that effectively protect privacy depend on a multi-stakeholder process (to which the NTIA is strongly committed) that is truly representative, which in turn depends on properly identifying relevant social spheres.