Archives

Jane Bambauer and Derek Bambauer, Vanished

Comment by: Eric Goldman

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2326236

Workshop draft abstract:

The conventional wisdom on Internet censorship assumes that the United States government makes fewer attempts to remove and delist content from the Internet than other democracies. Likewise, democratic governments are believed to make fewer attempts to control on-line content than the governments of non-democratic countries. These assumptions are theoretically sound: most democracies have express commitments to the freedom of speech and communication, and the United States has exceptionally strong legal immunities for Internet content providers, along with judicial protection of free speech rights that make it unique even among democracies. However, the conventional wisdom is not entirely correct. A country’s system of governance does not predict well how it will seek to regulate on-line material. And democracies, including the United States, engage in far more extensive censorship of Internet communication than is commonly believed.

This Article explores the gap between free speech rhetoric and practice by analyzing recently released Google data describing the official requests and demands that governments made to the company between 2010 and 2012 to remove content. Controlling for Internet penetration and Google’s relative market share in each country, we examine international trends in content removal demands. Specifically, we explore whether some countries have a propensity to use unenforceable requests or demands to remove content, and whether these types of extra-legal requests have increased over time. We also examine trends within content categories to reveal differences in priorities among governments. For example, European Union governments more frequently seek to remove content for privacy reasons. More surprisingly, the United States government makes far more demands than other governments to remove content for defamation, even after controlling for population and Internet penetration.
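
To make the normalization concrete, here is a minimal sketch, in Python, of the kind of per-country adjustment the abstract describes; the function name, inputs, and units are illustrative assumptions rather than the authors’ actual methodology or dataset.

    # Illustrative sketch only: adjust raw removal-request counts so that countries
    # with different populations, Internet penetration rates, and Google market
    # shares can be compared. All names and inputs here are hypothetical.
    def adjusted_request_rate(requests, population, internet_penetration, google_share):
        """Removal requests per million Internet users reachable through Google.

        requests             -- raw count of government removal requests
        population           -- country population
        internet_penetration -- fraction of the population online (0 to 1)
        google_share         -- Google's estimated market share in the country (0 to 1)
        """
        reachable_users = population * internet_penetration * google_share
        return 1_000_000 * requests / reachable_users

Comparing rates of this kind, rather than raw counts, is what allows a claim such as the defamation finding above to survive controls for population and Internet penetration.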

The Article pays particular attention to government requests to remove content based upon claims regarding privacy, defamation, and copyright enforcement. We make use of more detailed data prepared specially for our study that shows an increase in privacy-related requests following the European Commission’s draft proposal to create a Right To Be Forgotten.

Jane Yakowitz, The New Intrusion

Comment by: Jon Mills

PLSC 2012

Workshop draft abstract:

The tort of intrusion upon seclusion offers the best theory to target legitimate privacy harms in the information age. This Article introduces a new taxonomy that organizes privacy law across four key stages of information flow—observation, capture (the creation of a record), dissemination, and use. Popular privacy proposals place hasty, taxing constraints on dissemination and use. Meanwhile, regulation targeting information flow at its source—at the point of observation—is undertheorized and ripe for prudent expansion.

Intrusion imposes liability for offensive observations. The classic examples involve intruders who gain unauthorized access to information inside the home or surreptitiously intercept telephone conversations, but the concept of seclusion is abstract and flexible. Courts have honored expectations of seclusion in public when the intruder’s efforts to observe were too aggressive and exhaustive. They have also recognized expectations of seclusion in files and records outside the plaintiff’s possession. This Article proposes a framework for extending the intrusion tort to new technologies by assigning liability to targeted and offensive observations of the data produced by our gadgets.

Intrusion is a theoretically and constitutionally sound form of privacy protection because the interests in seclusion and respite from social participation run orthogonal to free information flow. Seclusion can be invaded without the production of any new information, and conversely, sensitive new information can become available without intrusion. This puts the intrusion tort in stark contrast with the tort of public disclosure, where the alleged harm is a direct consequence of an increase in knowledge. Since tort liability for intrusion regulates conduct (observation) instead of speech (dissemination), it does not prohibit a person from saying what he already knows, and therefore can coexist comfortably with the bulk of First Amendment jurisprudence.

Felix Wu, Privacy and Utility in Data Sets

Comment by: Jane Yakowitz

PLSC 2011

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2031808

Workshop draft abstract:

Privacy and utility are inherently in tension with one another.  Information is useful exactly when it allows someone to have knowledge that he would not otherwise have, and to make inferences that he would not otherwise be able to make.  The goal of information privacy is precisely to prevent others from acquiring particular information or from being able to make particular inferences.  Moreover, as others have demonstrated recently, we cannot divide the world into “personal” information to be withheld, and “non-personal” information to be disclosed.  There is potential social value to be gained from disclosing even “personal” information.  And the revelation of even “non-personal” information might provide the final link in a chain of inferences that leads to information we would like to withhold.
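
The “final link in a chain of inferences” point is the logic behind linkage attacks; the following toy Python sketch, with hypothetical field names and the classic quasi-identifier combination of ZIP code, birth date, and sex, is supplied only for illustration and is not drawn from the paper.

    # Toy illustration: each released field looks "non-personal," yet joining an
    # anonymized table to a named public roster on shared quasi-identifiers can
    # single out a person and attach a sensitive attribute to a name.
    QUASI_IDS = ("zip", "birthdate", "sex")

    def reidentify(anonymized_rows, public_roster):
        """Link 'anonymized' rows to named roster rows via shared quasi-identifiers."""
        index = {tuple(row[k] for k in QUASI_IDS): row["name"] for row in public_roster}
        matches = []
        for row in anonymized_rows:
            key = tuple(row[k] for k in QUASI_IDS)
            if key in index:  # the final link in the chain of inferences
                matches.append((index[key], row["diagnosis"]))
        return matches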

Thus, the disclosure of data involves an inherent tradeoff between privacy and utility. More disclosure is both more useful and less private.  Less disclosure is both less useful and more private.  This does not mean, however, that the disclosure of any one piece of information is no different from the disclosure of any other.  Some disclosures may be relatively more privacy invading and less socially useful, or vice versa.  The question is how to identify the privacy and utility characteristics of data, so as to maximize the utility of the data disclosed, and minimize privacy loss.

Thus far, at least two different academic communities have studied the question of analyzing privacy and utility. In the legal community, this question has come to the fore with recent work on the re-identification of individuals in supposedly anonymized data sets, as well as with questions raised by the behavioral advertising industry’s collection and analysis of consumer data. In the computer science community, this question has been studied in the context of formal models of privacy, particularly that of “differential privacy.” This paper seeks to bridge the two communities, to help policy makers understand the implications of the results obtained by formal modeling, and to suggest to computer scientists additional formal approaches that might capture more of the features of the policy questions currently being debated. We can and should bring both the qualitative analysis of the law and the quantitative analysis of computer science to bear on this increasingly salient question of privacy-utility tradeoffs.
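
For readers outside computer science, a minimal sketch of the Laplace mechanism, the textbook construction that satisfies epsilon-differential privacy, may make the tradeoff concrete; the abstract names differential privacy only in general terms, so the particular mechanism and code below are illustrative assumptions.

    # Minimal sketch of the Laplace mechanism for a counting query. A count has
    # sensitivity 1 (one person's presence changes it by at most 1), so Laplace
    # noise with scale 1/epsilon yields epsilon-differential privacy. Smaller
    # epsilon means more noise: more privacy, less utility.
    import random

    def laplace_sample(scale):
        """One draw from Laplace(0, scale), as a difference of two exponentials."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def dp_count(records, predicate, epsilon):
        """Release a count of matching records with epsilon-differential privacy."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_sample(1.0 / epsilon)

The epsilon parameter is exactly the dial on the privacy-utility tradeoff described above: turning it down buys privacy at the cost of accuracy, and vice versa.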