Archives

Aleecia M. McDonald and Wendy Seltzer, Quantifying the Internet’s Erasers: Analysis through Chilling Effects Data

PLSC 2013

Workshop draft abstract:

The Internet has made the childhood threat of a “permanent record” real (though it hasn’t necessarily made that record accurate), causing many to ask whether there’s any hope for privacy once information has entered search engines. In Europe, discussion centers on a “right to be forgotten,” which relies not on removing data, but rather on de-indexing it. In the US, a 2012 bill proposed an “Internet Eraser Button,” with a similar theme of de-indexing data for children.

Opponents of these de-indexing approaches have asserted that they are technically impossible. And yet, de-indexing sounds remarkably similar to how we already handle some information that corporations would prefer not to share — claimed copyright infringements. The DMCA encourages “information location tools,” including search engines, to respond to takedown notices by removing links, even if the alleged infringement remains online — much as an Eraser Button might operate to de-index data for individuals.

This form of notice and takedown has been operating for more than a decade, and the Chilling Effects Clearinghouse maintains records of such de-index requests to Google and Twitter. In this paper, we use Chilling Effects data to quantify the growth of copyright takedown notices over time and put forward a set of competing possibilities for comparison with the scale of the problem an Internet Eraser Button for Privacy might address. We then project what the results might mean for the technical viability or intractability of addressing individual privacy harms through de-linking.
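
As a rough, purely illustrative sketch of the kind of trend analysis described above (not the authors’ actual method or data), the following Python fragment fits a simple exponential growth model to hypothetical yearly counts of takedown notices; the years, counts, and resulting growth figures are invented placeholders.

    # Illustrative only: the counts below are invented, not Chilling Effects data.
    import numpy as np

    years = np.array([2002, 2004, 2006, 2008, 2010, 2012])      # hypothetical years
    notices = np.array([120, 450, 1800, 7500, 32000, 140000])   # hypothetical notice counts

    # Fit log(notices) = a + b * t, so exp(b) - 1 approximates the annual growth rate.
    t = years - years[0]
    b, a = np.polyfit(t, np.log(notices), 1)

    print(f"Estimated annual growth rate: {np.exp(b) - 1:.0%}")
    print(f"Projected notices for 2014:   {np.exp(a + b * (2014 - years[0])):,.0f}")

A curve of this sort, fit to the actual Chilling Effects records, is one way to set the observed scale of copyright de-indexing alongside the projected scale of privacy de-indexing requests.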

Wendy Seltzer, Privacy, Feedback, and Option Value

Comment by: Michael Zimmer

PLSC 2012

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2032100

Workshop draft abstract:

We have confused intuitions about privacy in public. Sometimes, relying on a rationalist paradigm of secrecy, we say “if you don’t want something published to the world, don’t do it where others can see”: don’t post to Facebook, don’t converse on the public streets. Yet other times, drawing upon experience in natural and constructed social environments, we find that we can have productive interactions in a context of relative, not absolute, privacy: privacy is not binary.

Over time, we have worked out privacy-preserving fixes in architecture, norms, and law: we build walls and window shades; develop understandings of friendship, trust, and confidentiality; and protect some of these boundaries with the Fourth Amendment, statute, regulation, tort, and contract. The environment provides feedback mechanisms; we adapt to the disclosure problems we experience (individually or societally). We move conversations inside, scold or drop untrustworthy friends, rewrite statutes. Feedback lets us find the boundaries of private contexts and probe the thickness of their membranes.

Technological change throws our intuitions off when we don’t see its privacy impact on a meaningful timescale. We get wrong, limited, or misleading feedback about the publicity of our actions and interactions online and offline. Even if we learn of the possibility of online profiling or constant location tracking, we fail to internalize this notice of publicity because it does not match our in-the-moment experience of semi-privacy. We thus end up with divergence between our understanding and our experience of privacy.

Prior scholarship has approached privacy in public from a few angles. It has identified various interests that fall under the heading of privacy: dignity, confidentiality, secrecy, presentation of self, harm; it has cataloged the legal responses, giving explanations of law’s development and suggestions for its further adaptation. Scholars have theorized privacy, moving beyond the binary of “secret or not secret” to offer contextual and experiential gradients. [1] Often, this scholarship reviews specific problems and situates them in larger context. [2] User studies and economic analysis have improved our understanding of the privacy experience, including the gap between expectations and reality. [3] Computer science and information theory help us quantify some of the elements we refer to as privacy. [4] Finally, the design and systems-engineering literatures suggest that feedback mechanisms play an important role in the usability and comprehensibility of individual objects and interfaces and in the ability of a system as a whole to reach stable equilibrium. [5]

This article aims to do three things:

1. Introduce a notion of privacy-feedback to bridge the gap between contextual privacy and the dominant secrecy paradigm. Privacy-feedback, through design and social interaction, enables individuals to gauge the publicity of their activities and to modulate their behavior in response.

2. Apply the tools of option value to explain the “harm” of technological and contextual breaches of privacy. The financial modeling of real options helps to describe and quantify the value of choice amid uncertainty. Even without knowing all the potential consequences of data misuse, or which ones will in fact come to pass, we can say that unconsented-to data collection deprives the individual of options: to disclose on his or her own terms, and to act inconsistently with disclosed information. A short numerical sketch of this point follows the list below.

3. Propose a broader framework for architectural regulation, in which technological feedback can enable individual self-regulation to serve as an alternative to command-and-control legal regulation. Feedback then provides a metric for evaluating proposed privacy fixes: does the fix help its users get meaningful feedback about the degree of privacy of their actions? Does it enable them to preserve disclosure options?
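
To make the real-options intuition in point 2 concrete, here is a minimal one-period binomial sketch in Python; the probabilities and payoffs are invented for illustration and are not drawn from the paper.

    # Toy illustration of option value: all figures are invented, not from the paper.
    p_good = 0.5        # probability that disclosure turns out well
    payoff_good = 10.0  # net benefit if disclosure turns out well
    payoff_bad = -8.0   # net harm if disclosure turns out badly

    # If the data are collected or disclosed without consent, the individual is
    # committed before the uncertainty resolves and bears both branches.
    value_committed = p_good * payoff_good + (1 - p_good) * payoff_bad

    # If the choice is retained, the individual can disclose only in the good
    # state and decline in the bad one: the payoff is max(outcome, 0) per state.
    value_with_option = p_good * max(payoff_good, 0) + (1 - p_good) * max(payoff_bad, 0)

    print(f"Expected value when committed now:    {value_committed:+.1f}")
    print(f"Expected value when choice is kept:   {value_with_option:+.1f}")
    print(f"Option value of retaining the choice: {value_with_option - value_committed:+.1f}")

On these invented numbers the committed expectation is +1.0 and the retained-choice expectation is +5.0, so the lost option value is 4.0: even before any concrete misuse occurs, stripping away the choice has a quantifiable cost.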

Finally, we see privacy-feedback take a larger systemic role. If technology and law fail to offer the choices necessary to protect privacy, we can give meta-feedback, changing the law to do better.


[1] See Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford Law Books, 2009); Daniel J. Solove, ‘A Taxonomy of Privacy’, 154 U. Pa. L. Rev. 477 (2006); Julie E. Cohen, ‘Examined Lives: Informational Privacy and the Subject as Object’, 52 Stan. L. Rev. 1373 (2000).

[2] See Paul Ohm, ‘Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization’, 57 UCLA L. Rev. 1701 (2010); Orin S. Kerr, ‘The Fourth Amendment and New Technologies: Constitutional Myths and the Case for Caution’, 102 Mich. L. Rev. 801 (2004); Lawrence Lessig, ‘The Architecture of Privacy’, 1 Vand. J. Ent. L. & Prac. 56 (1999); Jeffrey Rosen, The Unwanted Gaze: The Destruction of Privacy in America (Vintage, 2001).

[3] See Alessandro Acquisti and Jens Grossklags, ‘Privacy and Rationality in Individual Decision Making’, IEEE Security & Privacy 3(1) (2005); Aleecia M. McDonald and Lorrie Faith Cranor, ‘The Cost of Reading Privacy Policies’, I/S: A Journal of Law and Policy for the Information Society 4(3) (2008); Christine Jolls, Cass R. Sunstein and Richard Thaler, ‘A Behavioral Approach to Law and Economics’, 50 Stan. L. Rev. 1471 (1998); Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, 2008).

[4] See James Gleick, The Information: A History, a Theory, a Flood (Pantheon, 2011); Claude E. Shannon and Warren Weaver, The Mathematical Theory of Communication (University of Illinois Press, 1962); Cynthia Dwork, ‘Differential Privacy’, in Automata, Languages and Programming (ICALP 2006).

[5] See Donald A. Norman, Emotional Design (Basic Books, 2004); Jay W. Forrester, Industrial Dynamics (MIT Press, 1961); Charles Perrow, Normal Accidents: Living with High-Risk Technologies (Princeton University Press, 1984); Herbert A. Simon, The Sciences of the Artificial (MIT Press, 1996).

Wendy Seltzer, Privacy, Attention, and Political Community

Comment by: Stephen Hetcher

PLSC 2010

Workshop draft abstract:

In an era of information overload, some scholars (Lessig, Rosen) have characterized a facet of privacy as a response to the problem of the short attention span: because onlookers will not spend the time or attention needed to get the full context, disclosure of some information may produce a distorted view of the subject. Where others have spoken of privacy as deception (Posner) or as a barrier to community governance (Etzioni), I explore privacy-through-limited-disclosure as a constituent of community and political organization.

To organize effectively in a modern liberal democracy, citizens must often aggregate into political groups larger than local or social communities.  Their political organizing (and even common adherence to the political system) can be threatened if differences become more salient than points of common interest — even if those differences are irrelevant to common political goals and outside the political context. Privacy from disclosure may thus be necessary to avoid distraction.

Using John Rawls’s idea of political liberalism as an Overlapping Consensus among groups with different underlying conceptions of the good, I suggest that privacy is an important component of political tolerance and accommodation. Privacy can support consensus and restore a respect for pluralism even when we lack the time or attention to understand its roots. As networked communications override some of the traditional architectural support for privacy, we must learn to avert our gaze from glancing disapproval, instead looking deeper or not at all.